What explanatory power does Kahneman's System 2 possess?

For me there are two key components: the transition of a task from S2 to S1 through repetition and the hypothesising/internalising of heuristics, and the use of S1 subtasks to solve more difficult S2 tasks. As an example, consider how mathematical operations move from S2 to S1 as humans learn them.

Consider a child who can count up and down on the integers - i.e. given an integer, they can apply the increment function to get the next integer, or the decrement function to get the previous one. This is an S1 task, where the result of the operation is taken as "just-so". At that point addition is still an S2 task for them, and one they solve through repeated application of S1 subtasks: one approach to computing A+B is to repeatedly increment A and decrement B until B=0, at which point the incremented A is the answer.
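
A rough sketch of that counting-based addition, with the S1 primitives written as functions (the names increment/decrement/add are just my labels for illustration, and it assumes B is a non-negative integer):

```python
def increment(n: int) -> int:
    """S1 primitive: the child "just knows" the next integer."""
    return n + 1

def decrement(n: int) -> int:
    """S1 primitive: the child "just knows" the previous integer."""
    return n - 1

def add(a: int, b: int) -> int:
    """S2 procedure: addition decomposed into repeated S1 counting steps.
    Assumes b is a non-negative integer."""
    while b != 0:
        a = increment(a)   # count A up by one...
        b = decrement(b)   # ...and B down by one, until B hits zero
    return a
```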

With enough practice, the child learns the basic rules of addition, and it becomes so deeply ingrained that addition is now an S1 task. Multiplication, however, is still S2 to them, but might be solved like this: to multiply A and B, start with C=0, then decrement A every time you add B to C. Once A=0, C=A*B. Through enough repetition, they internalise this algorithm (or learn many examples of it by rote) and multiplication might be an S1 task now.
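
Continuing the same sketch, multiplication built out of the now-internalised add step (again illustrative; assumes A is a non-negative integer):

```python
def multiply(a: int, b: int) -> int:
    """S2 procedure: multiplication decomposed into repeated additions,
    where add() is now treated as an S1 primitive.
    Assumes a is a non-negative integer."""
    c = 0
    while a != 0:
        c = add(c, b)      # add B to the running total (an internalised S1 step)
        a = decrement(a)   # count A down towards zero
    return c
```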

By now you can hopefully see where I'm going - exponentiation is the analogous S2 task on the next level up, and there's an algorithm a learner might perform to decompose it into a sequence of S1 tasks. (Of course, outside the realm of mathematics S2 tasks may be much fuzzier, e.g. puzzling over ethical dilemmas.)
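To make the pattern concrete, here's one possible decomposition of exponentiation, continuing the sketch above - this particular algorithm is my illustration rather than anything spelled out in the post (assumes a non-negative integer exponent B, with A**0 = 1):

```python
def power(a: int, b: int) -> int:
    """S2 procedure one level up: exponentiation as repeated multiplication,
    with multiply() now treated as an S1 primitive.
    Assumes b is a non-negative integer."""
    c = 1
    while b != 0:
        c = multiply(a, c)   # multiply the running result by A
        b = decrement(b)     # count the exponent down towards zero
    return c
```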

The interesting thing about this (to me) is that the transition from S2 task to S1 task is the critical time where systematic errors and biases may be introduced. I see this as analogous to how a neural net can underfit or overfit its training data, depending on the heuristics that are learned. Under this analogy, a task that once demanded difficult S2-style effort becomes, after training, an S1-style black-box input/output mapping. This can provide rapid "intuitive" results for us in the same way as S1 human thinking does - but it is similarly error-prone.
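
As a toy illustration of the underfit/overfit point (my own example, nothing specific to Kahneman): fit polynomial "heuristics" of different capacities to the same noisy data and compare their held-out error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying quadratic "task" (illustrative toy data).
x_train = np.linspace(-1, 1, 12)
y_train = x_train**2 + rng.normal(scale=0.2, size=x_train.shape)
x_test = np.linspace(-1, 1, 200)
y_test = x_test**2

for degree in (1, 2, 9):   # too simple, roughly right, too flexible
    coeffs = np.polyfit(x_train, y_train, degree)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out MSE = {test_mse:.4f}")
```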