In the previous post, I said:

[...] in both schemes, Factored Cognition includes a step that is by definition non-decomposable. In Ideal Debate, this step is judging the final statement. In HCH, it is solving a problem given access to the subtrees. This step is also entirely internal to the human.

You can take this as a motivation for part two of the sequence, which is about how humans think. I think a good place to start is by reflecting on the intuition-based argument against Factored Cognition. Here is a version given by Rohin Shah on the AI Alignment Podcast:

[...] I should mention another key intuition against [the Factored Cognition Hypothesis]. We have all these examples of human geniuses like Ramanujan, who were posed very difficult math problems and just immediately get the answer and then you ask them how did they do it and they say, well, I asked myself what should the answer be? And I was like, the answer should be a continued fraction. And then I asked myself which continued fraction and then I got the answer. And you're like, that does not sound very decomposable. It seems like you need these magic flashes of intuition. Those would be the hard cases for factored cognition. [...]

This sounds sort of convincing, but what is this intuition thing? Wikipedia says that...

Intuition is the ability to acquire knowledge without recourse to conscious reasoning.

... which I take to represent the consensus view. However, I don't think it's accurate. Consider the following examples:

  1. I throw you a ball, and you catch it. We know that your brain had to do something that effectively approximates Newtonian physics to figure out where the ball was headed, but you're not consciously aware of any such process. (A toy sketch of this computation follows the list.)
  2. I ask you to compute $8 \cdot 5$. I predict that your brain just 'spat out' the right answer, without any conscious computation on your part.
  3. I show you a mathematical conjecture that you immediately feel is true. I ask you why you think it's true, you think about it for two minutes, and manage to derive a short proof. We know that this proof is not the reason why you thought it was true to begin with.
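To make the first example concrete, here is a minimal sketch, in Python, of the kind of computation the brain would have to effectively approximate. Everything here is invented for illustration (the function name, the numbers, and the simplifying assumptions of flat ground and no air resistance):

```python
import math

def landing_point(x0, y0, vx, vy, g=9.81):
    """Predict where a thrown ball lands, assuming flat ground and no air drag.

    Solves y0 + vy*t - 0.5*g*t**2 = 0 for the positive root (the time of
    impact), then returns the horizontal position at that time.
    """
    t_impact = (vy + math.sqrt(vy**2 + 2 * g * y0)) / g
    return x0 + vx * t_impact

# A ball released 2 m above the ground, moving 10 m/s forward and 5 m/s up:
print(landing_point(x0=0.0, y0=2.0, vx=10.0, vy=5.0))  # ~13.3 m ahead
```

The point is not that the brain literally solves a quadratic; it's that whatever it does is behaviorally equivalent to a computation like this, and none of it is consciously accessible.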

It's evident that your brain is acquiring knowledge without recourse to conscious reasoning in all three cases. That means they would all involve intuition, according to Wikipedia's definition. Nonetheless, we would not refer to intuition for any of them.

This leads us to this post's conjecture:

'It's intuition' is something we say whenever our brain produces a result that we cannot explain after the fact.

Under this view, intuition has nothing to do with how you derived a result and everything to do with whether you can explain the result after the fact. This characterization fits all three of the above examples (as well as any others I know of):

  1. The ability to catch a ball does not feel impressive, hence it does not feel like it requires explaining.[1]
  2. You could easily prove that $8 \cdot 5 = 40$ using a lower-level concept like addition, hence you would not refer to intuition for the result.
  3. In this case, you might well refer to intuition initially, when I first ask you why the conjecture is true and you (intuitively) think it is. But as soon as you have the proof in hand, you would refer to the proof instead. In other words, as your best explanation for the result changes, our verdict on whether it is intuition changes as well, which shows that it can't possibly be about how the result was derived.

As an aside: the standard definition says 'intuition is [...]', whereas my proposed characterization says 'we refer to intuition for [...]'. Why? Because intuition is not a well-defined category. Whether we call something intuition depends on the result itself and on the rest of our brain, which means that any accurate characterization somehow has to take the rest of the brain into account. Hence the 'we refer to [...]' wording.


The classical view of intuition leads to a model of thinking with two separate modes: the 'regular' one and the 'intuition' one. This post asks you to replace that model with a unified one: there is only one mode of thinking, which sometimes yields results we can explain, and other times results we can't explain.

Provided you buy this, we have dissolved the concept. Intuition isn't a mode of thinking; it's just something we say depending on our ability to explain our thoughts. So, that's great! It means we have nothing to worry about! Factored Cognition works! Haha, just kidding. It's actually closer to the opposite. Yes, there is only one mode of thinking, but that's because all thinking is intuition-like, in the sense that the vast majority of steps are hidden.

To see this, all you need to do is look at examples. Do you have access to the computations your brain does to compute the 'for-tee' thought that pops into your head whenever you read the symbols $8 \cdot 5$? Now summon the mental image of a hammer. Did you have access to the computations your brain did to construct this image? Or, you can go back to catching that ball. In all those cases (and others), our brain provides us zero access to inspect what it is doing. That's just how awareness works. Our brain shows us the results, but that's it. The algorithms are hidden.

I think this view is very compatible with a scientific understanding of the brain, and much more so than anything that positions intuition as a special category of thought. But more on that in the next post.


Given the single-process model, let's revisit the Ramanujan example. What does this kind of thing mean for Factored Cognition?

The immediate thing the example shows is that the [computations your brain can run without consulting you] can be quite long. Unlike in the case where your brain did something unimpressive, Ramanujan did something that most people probably couldn't replicate even if they had a month to spend on the problem.

Let's linger on this for a bit. In a private conversation, TurnTrout has called the 'computations your brain can run without consulting you' the 'primitives [of] your cognitive library'. I think this is a cool term. Note that 'primitive' indicates an indivisible element, so they are only primitives from the perspective of awareness. For example, a primitive of most people's library is 'compute the destination of a flying ball', and another is 'compute $8 \cdot 5$'. If you're a computer scientist or mathematician, your library probably has a primitive for $2^{10}$, whereas if you're a normal person, it probably doesn't, so you would have to compute $2^{10}$ step by step, as $2 \to 4 \to 8 \to 16 \to 32 \to 64 \to 128 \to 256 \to 512 \to 1024$.

And even then, some of these steps won't be primitives but will require a sequence of smaller primitives. For example, the step from 512 to 1024 might be computed as $512 + 512 = 500 + 500 + 12 + 12 = 1000 + 24 = 1024$, where each of those steps uses a primitive.
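If it helps, here is the same idea as a toy model, sketched in Python. The dictionary-of-primitives framing and all the names are my own invention for illustration, not anything from TurnTrout or the literature:

```python
def power_of_two(n, library):
    """Compute 2**n: use a library primitive if one exists, otherwise
    fall back on repeated doubling (a chain of smaller primitives)."""
    key = f"2^{n}"
    if key in library:
        return library[key]       # one opaque step, from awareness's perspective
    value = 1
    for _ in range(n):
        value = value + value     # each doubling is itself an addition primitive
    return value

cs_library = {"2^10": 1024}       # a computer scientist's library
lay_library = {}                  # a layperson has no such primitive

print(power_of_two(10, cs_library))   # 1024, in a single step
print(power_of_two(10, lay_library))  # 1024, via 2, 4, 8, ..., 512, 1024
```

Both calls compute the same answer; they differ only in how many inspectable steps they expose, which is exactly the sense in which 'primitive' is a fact about awareness rather than about the underlying computation.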

Similarly, if you're Ramanujan, your brain has a very complicated primitive that immediately suggests a solution for some set of problems. If you're a professional chess player, your library probably has all sorts of primitives that map certain configurations of the chess board to estimates of how promising they are. And so on. I think this is a solid framework under which to view this post. However, note that it's just descriptive: I'm saying that viewing your mental capabilities as a set of primitives is an accurate description of what your brain is doing and what about it you notice; I'm not saying that each primitive corresponds to a physical thing in the brain.

Then, just to recap, the claim is that 'it's intuition' is something we say whenever our primitives produce results we can't explain after the fact, and the concept doesn't refer to anything beyond that.

EXERCISE (OPEN-ENDED): If you agree with the post so far, think about what this might or might not imply for Factored Cognition, and why. Is it different for the Ideal Debate FCH and the HCH FCH?

Open-ended means that I'm not going to set a time limit, and I won't try to answer the question in this post, so you can think about it for as long as you want.

The reason why it might be a problem is that Factored Cognition likes to decompose things, but we cannot decompose our cognitive primitives. This leaves Factored Cognition with two options:

  1. help a human generate new primitives; or
  2. get by with the human's existing primitives (a toy sketch of this option follows the list).
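Here is a minimal sketch of what the second option might look like, assuming a fixed primitive library. The names and the toy decomposition are hypothetical; this illustrates decomposition bottoming out at primitives, not Ought's actual implementation of anything:

```python
def hch_answer(question, primitives, decompose, depth=10):
    """Answer `question` by recursively decomposing it into subquestions,
    bottoming out at the human's fixed library of primitives."""
    if question in primitives:
        return primitives[question]   # indivisible from the human's perspective
    if depth == 0:
        raise ValueError(f"no primitive and no depth left for {question!r}")
    subquestions, combine = decompose(question)
    return combine([hch_answer(q, primitives, decompose, depth - 1)
                    for q in subquestions])

# Toy example: 'sum the integers from lo to hi', where only single
# numbers are primitives and every other span must be split in half.
def decompose(span):
    lo, hi = span
    mid = (lo + hi) // 2
    return [(lo, mid), (mid + 1, hi)], sum

primitives = {(i, i): i for i in range(1, 101)}
print(hch_answer((1, 100), primitives, decompose))  # 5050
```

The failure mode this post worries about is the ValueError branch: if a subquestion neither matches an existing primitive nor decomposes into ones that do, the scheme is stuck, because the library cannot be meaningfully expanded within the tree.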

The first option raises the question of how new primitives are created. As far as I can tell, the answer is usually experience + talent. People develop their primitives by seeing a large number of problem instances throughout their careers, which means that the amount of experience required can be substantial. A common meme, which is probably not literally true, is that it takes 10,000 hours to achieve mastery in a field. Anything in that range is intractable for either scheme.

This doesn't mean no new primitives are generated during the execution of an HCH tree, since all learning probably involves generating some primitives. However, it does exclude a large class of primitives that could be learned in a scheme that approximates one human thinking for a long time (rather than many humans consulting each other). By using someone who is already an expert as the human, it's possible to start off with a respectable set of primitives, but that set cannot then be significantly expanded.


  1. As a further demonstration, consider what would happen if you studied the ball case in detail. If you internalized that your brain is doing something complicated, and that we have no clue what's really going on, you might gradually be tempted to say that we use intuition after all. If so, this demonstrates that explanations are the key variable. ↩︎

1 comment:

Maybe the problem with intuition is that we misplace our wonder.

Intuition usually works like this:

(Magic) -> Behavior -> Proof of success

For example:

  • (Magic) -> Run and hold up your glove -> Feel the ball land in your glove
  • (Magic) -> Guess that 8 * 5 = 40 -> Observe that your final calculation is correct
  • (Magic) -> Guess that the conjecture is true -> Create a proof that other mathematicians agree is sound

We find it magical that we can't observe the background thinking. "Wow, how is it possible that I knew the answer and can't explain why?"

But it makes more sense to me that conscious thought is magical. The fact that we can observe our own thoughts and behavior depends on mental faculties beyond those that let us accomplish those thoughts and behaviors in the first place. An eagle can fly, but we wouldn't be surprised to learn that it doesn't know how.

If Ramanujan could explain his own intuitions about his conjectures, that would be even more wondrous than the fact that he could generate his conjectures without any explanation. Ditto if a baseball player could explain exactly how they knew how to catch a particular pop fly.

Calculators can compute arithmetic. It would take a lot of extra work for them to also produce an explanation for how they computed each specific answer. Likewise, it's far easier to create a neural network that can classify images than to get that neural network to "explain itself."

So the magic isn't that we don't understand how our brain produces knowledge. The real magic is that we have any access to or executive control over our own thoughts at all.

Of course, being able to extend that account of where our thoughts come from, as you are trying to do, is also part of the real magic.