(Cross-posted from my personal blog, Dialectics of Nature)

Psychologist Daniel Kahneman, whose study of human biases laid the foundation for modern behavioral economics, outlines a dichotomy in his book Thinking, Fast and Slow: human thinking is performed in two different ways. "System 1" is fast and instinctive; "System 2" is slower and deliberate.

A question one would answer with type 1 thinking is "what is 6x3?" A question suited to type 2 thinking is "what is 6x33?" The mechanisms the brain uses to arrive at the two answers are fundamentally different. Asked for an explanation, a person would produce a similar step-by-step arithmetic account for both. But for the type 1 question, the process described never occurred: no math-savvy person adds three sixes when faced with such a simple calculation. That breakdown, or any other valid one, is made up after the fact. For "6x33", by contrast, the person asked how they arrived at the result would likely answer truthfully, giving an account of arithmetic actually performed in their head.
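To make the difference concrete, here is a minimal Python sketch of the kind of step-by-step account a person could truthfully give for "6x33" (the multiply_with_steps helper and its particular decomposition are invented for illustration):

```python
def multiply_with_steps(a, b):
    """Decompose a x b into the digit-by-digit partial products
    of long multiplication, recording each intermediate step."""
    steps = []
    total = 0
    for place, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10 ** place
        steps.append(f"{a} x {int(digit) * 10 ** place} = {partial}")
        total += partial
    steps.append(f"total: {total}")
    return total, steps

_, trace = multiply_with_steps(6, 33)
print("\n".join(trace))
# 6 x 3 = 18
# 6 x 30 = 180
# total: 198
```

Note that each partial product (6x3, then 6x30) is itself supplied by type 1 recall; type 2 thinking contributes the scaffolding that sequences those recalls into a plan.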

A person prompted with "6x3" does not add three sixes so fast that they fail to notice it. They do not perform the calculation at all, not even unconsciously. The operation that produces the answer is closer to information retrieval than to calculation. But rather than accessing some storage space for the results of trivial arithmetic, the answer is encoded in neural connections. It is connectionist and behavioral rather than symbolic and cognitive. Through constant reuse (the behavioral aspect), this knowledge is programmed into the weights of a biological neural network, the human brain (the connectionist aspect). The only thing that happens in a brain confronted with "2x2" is neurons firing in a sequence that produces 4. The weights are sub-symbolic: there is no explicit mathematical representation and no recognizable intermediate steps.
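As a caricature of that retrieval, here is a minimal sketch of a multiplication table "memorized" in the weights of a single linear layer (the weights are written by hand for brevity, standing in for what training through constant reuse would produce):

```python
import numpy as np

N = 10  # single-digit operands 0..9

def one_hot_pair(a, b):
    """Activate exactly one input unit for the stimulus (a, b)."""
    v = np.zeros(N * N)
    v[a * N + b] = 1.0
    return v

# One weight per input unit, holding the product for that (a, b) pair.
W = np.array([a * b for a in range(N) for b in range(N)], dtype=float)

# Answering "6x3" is a single weighted sum over activations:
# retrieval from connections, with no intermediate arithmetic steps.
print(W @ one_hot_pair(6, 3))  # 18.0
```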

In type 2 thinking, one uses one's brain to emulate symbolic information processing, just as one emulates a virtual machine on one's desktop. One creates algorithms and plans, and puts effort into imagining numbers, patterns, and structures. An imaginary canvas for equations appears; it is an act of conjuring a symbolic representation. In the posthumanism of Stiegler's Technics and Time, the 'faculty of symbolization' is an interaction of the human mind with its environment, the externalization of the central nervous system. In type 2 thinking, 'one thinks, one invents, one makes discoveries—but they have acquired, unnoticed, a displaced, "symbolic" meaning.'

And, biologically, the cerebral cortex is just an augmentation of the monkey brain humans carry: the outermost layer, put to work by a primitive ape to build airplanes and write poetry. This gives transhumanists hope for a tertiary level: a silicon superintelligence attached to the cortex, working for the cortex just as the cortex works for the brain's inner parts. It is a promise of superhuman intelligence without losing what it means to be human.

Type 2 thinking is what the top-down AIs of the twentieth century used: pre-programmed expert systems deploying logic and probability to make decisions. Type 1 thinking is, by and large, the only thinking currently used in artificial intelligence. Type 2 might yet emerge from the heavily connectionist-behaviorist approach, just as it does in biological intelligence, but today's deep neural networks perform no explicit symbolic reasoning. That is what creates the problems with interpretability: how can one interpret how a network arrived at its conclusion when no meaningful symbolic process was employed? And making an AI explain its actions would only make it use type 1 thinking to generate a valid-sounding rationale that fits the already obtained result.
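For contrast, here is a toy forward-chaining sketch in the spirit of those top-down systems (the rules and names are invented for illustration). Every conclusion arrives with its own explicit symbolic derivation, which is precisely the interpretability that sub-symbolic networks lack:

```python
# Knowledge lives in explicit, human-readable rules:
# if all conditions hold, the conclusion may be added as a new fact.
rules = [
    ({"fever", "cough"}, "likely_flu"),
    ({"likely_flu"}, "recommend_rest"),
]

def infer(facts):
    """Apply rules until nothing new can be derived, logging each step."""
    facts, trace = set(facts), []
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                trace.append(f"{sorted(conditions)} => {conclusion}")
                changed = True
    return facts, trace

_, trace = infer({"fever", "cough"})
print("\n".join(trace))
# ['cough', 'fever'] => likely_flu
# ['likely_flu'] => recommend_rest
```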


Any analogy between the different ways the human brain operates and the different ways machine learning algorithms operate is very loose, and I think it is important to keep that in mind and not assume we are learning much about one when we study the other. Yes, there are issues with the lack of interpretability in neural network models, but the System 1 / System 2 dichotomy doesn't shed any useful light on them.

The motivation for the post was Kahneman himself using the System 1 / System 2 dichotomy as a comparison when talking about neural-network vs. symbolic AI, and the clear connection between Stiegler's philosophy and that dichotomy.

Of course, the human brain and deep neural networks are not the same, but DeepMind, for example, advocates using one to learn about the other:

"We believe that drawing inspiration from neuroscience in AI research is important for two reasons. First, neuroscience can help validate AI techniques that already exist. Put simply, if we discover one of our artificial algorithms mimics a function within the brain, it suggests our approach may be on the right track. Second, neuroscience can provide a rich source of inspiration for new types of algorithms and architectures to employ when building artificial brains. Traditional approaches to AI have historically been dominated by logic-based methods and theoretical mathematical models. We argue that neuroscience can complement these by identifying classes of biological computation that may be critical to cognitive function."

A relevant example given in the article is the study of the firing properties of dopamine neurons in the mammalian basal ganglia, where insights from reinforcement learning are applied to neurophysiological research.
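For readers unfamiliar with that work, here is a minimal sketch of the temporal-difference prediction error that dopamine firing is reported to track (the two-state task and all parameter values are invented for illustration):

```python
gamma, alpha = 0.9, 0.1            # discount factor, learning rate
V = {"cue": 0.0, "outcome": 0.0}   # learned state values

def td_update(state, reward, next_value):
    """One TD(0) step: nudge V(state) toward reward + gamma * next_value."""
    delta = reward + gamma * next_value - V[state]  # prediction error
    V[state] += alpha * delta
    return delta

errors = []
for trial in range(200):
    td_update("cue", 0.0, V["outcome"])            # cue precedes the reward
    errors.append(td_update("outcome", 1.0, 0.0))  # reward arrives, episode ends

print(round(errors[0], 3), round(errors[-1], 3))  # 1.0 0.0
# Once the reward is fully predicted, the error signal (and the putative
# dopamine response to the reward itself) vanishes.
```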

I think the connection drawn in the post is worthwhile, as it points toward considering Stiegler's work in the context of symbolic vs. connectionist AI, which could be valuable for the philosophical problems we encounter when designing fair or trustworthy AI.