Cross-posted from my personal blog.

Last month I finally got round to reading The Eureka Factor by John Kounios and Mark Beeman, a popular book summarising research on 'insightful' thinking. I first mentioned it a couple of years ago after I'd read a short summary article, when I realised it was directly relevant to my recurring 'two types of mathematician' obsession:

The book is not focussed on maths – it’s a general interest book about problem solving and creativity in any domain. But it looks like it has a very similar way of splitting problem solvers into two groups, ‘insightfuls’ and ‘analysts’. ‘Analysts’ follow a linear, methodical approach to work through a problem step by step. Importantly, they also have cognitive access to those steps – if they’re asked what they did to solve the problem, they can reconstruct the argument.

‘Insightfuls’ have no such access to the way they solved the problem. Instead, a solution just ‘pops into their heads’.

Of course, nobody is really a pure ‘insightful’ or ‘analyst’. And most significant problems demand a mixed strategy. But it does seem like many people have a tendency towards one or the other.

I wasn't too sure what I was getting into. The replication crisis has made me hyperaware of the dangers of uncritically accepting any results in psychology, and I'm way too ignorant of the field to have a good sense for which results still look plausible. However, the book turned out to be so extraordinarily Relevant To My Interests that I couldn't resist writing up a review anyway.

The final chapters had a few examples along the lines of '[weak environmental effect] primes people to be more/less insightful', and I know enough to stay away from those, but the earlier parts look somewhat more solid to me. I haven't made much effort to trace back references, though, and I could easily still be too credulous.

(I didn't worry so much about replication with my previous post on the Cognitive Reflection Test. Getting the bat and ball question wrong is hardly the kind of weak effect that you need a sensitive statistical instrument to detect. It's almost impossible to stop people getting it wrong! I did steer clear of any more dubious priming-style results, though, like the claim that people do better on the CRT when reading it 'in a disfluent font'.)

Insight and intuition

First, it's worth getting clear on exactly what Kounios and Beeman mean by 'insight'. As they use it, insight is a specific type of creative thinking, which they define more generally as 'the ability to reinterpret something by breaking it down into its elements and recombining these elements in a surprising way to achieve some goal.' Insight is distinguished by its suddenness and lack of conscious control:

When this kind of creative recombination takes place in an instant, it’s an insight. But recombination can also result from the more gradual, conscious process that cognitive psychologists call “analytic” thought. This involves methodically and deliberately considering many possibilities until you find the solution. For example, when you’re playing a game of Scrabble, you must construct words from sets of letters. When you look at the set of letters “A-E-H-I-P-N-Y-P” and suddenly realize that they can form the word “EPIPHANY,” then that would be an insight. When you systematically try out different combinations of the letters until you find the word, that’s analysis.

Insights tend to have a few other features in common. Solving a problem by insight is normally very satisfying: the insight comes into consciousness along with a small jolt of positive affect. The insight itself is usually preceded by a longer period of more effortful thought about the problem. Sometimes this takes place just before the moment of insight, while at other times there is an 'incubation' phase, where the solution pops into your head while you've taken a break from deliberately thinking about it.

I'm not really going to get into this part in my review, but the related word 'intuition' is also used in an interestingly specific sense in the book, to describe the sense that a new idea is lurking beneath the surface, but is not consciously accessible yet. Intuitions often precede an insight, but have a different feel to the insight itself:

This puzzling phenomenon has a strange subjective quality. It feels like an idea is about to burst into your consciousness, almost as though you’re about to sneeze. Cognitive psychologists call this experience “intuition,” meaning an awareness of the presence of information in the unconscious mind — a new idea, solution, or perspective — without awareness of the information itself, at least until it pops into consciousness.

Insight problems

To study insight, psychologists need to come up with problems that reliably trigger an insight solution. One classic example discussed in The Eureka Factor is the Nine Dot Problem, where you are asked to connect the following 3 by 3 grid of black dots using only four lines, without retracing or taking your pen off the page:

[Image: a 3 by 3 square grid of nine black dots.]

If you've somehow avoided seeing this puzzle before, think about it for a while first. I've put the solution and my discussion of it in a spoiler block below:

A solution can be found in the Wikipedia article on insight problems here. It'll probably look irritatingly obvious once you see it. The key feature of the solution is that the lines you draw have to extend outside the confines of the square of dots you start with (thus spawning a whole subgenre of annoying business literature on 'thinking outside the box'). Nothing in the rules forbids this, but the setup focusses most people's attention on the grid itself, and breaking out of this mindset requires a kind of reframing, a throwing away of artificially imposed constraints. This is a common characteristic of insight problems.

This characteristic also makes insight hard to test. For testing purposes, it's useful to have a large stock of similar puzzles in hand. But a good reframing like the one in the Nine Dot Problem tends to be a bit of a one-off: once you've had the idea of extending the lines outside the box, it applies trivially to all similar puzzles, and not at all to other types of puzzle.

(I talked about something similar in my last post, on the Cognitive Reflection Test. The test was inspired by one good puzzle, the 'bat and ball problem', and adds two other questions that were apparently picked to be similar. Five thousand words and many comments later, it's not obvious to me or most of the other commenters that these three problems form any kind of natural set at all.)

Kounios and Beeman discuss several of these eye-catching 'one-off' problems in the book, but their own research focusses on a more standardisable kind of puzzle, the Remote Associates Test. This test gives you three words, such as

PINE CRAB SAUCE

and asks you to find the common word that links them. The authors claim that these can be solved either with or without insight, and in their studies they asked participants to self-categorise each solution as either 'insightful' or 'analytic':

The analytic approach is to consciously search through the possibilities and try out potential answers. For example, start with “pine.” Imagine yourself thinking: What goes with “pine”? Perhaps “tree”? “Pine tree” works. “Crab tree”? Hmmm … maybe. “Tree sauce”? No. Have to try something else. How about “cake”? “Crab cake” works. “Cake sauce” is a bit of a reach but might be acceptable. However, “pine cake” and “cake pine” definitely don’t work. What else? How about “crabgrass”? That works. But “pine grass”? Not sure. Perhaps there is such a thing. But “sauce grass” and “grass sauce” are definitely out. What else goes with “sauce”? How about “applesauce”? That’s good. “Pineapple” and “crab apple” also work. The answer is “apple”!

This is analytical thought: a deliberate, methodical, conscious search through the possible combinations. But this isn’t the only way to come up with the solution. Perhaps you’re trying out possibilities and get stuck or even draw a blank. And then, “Aha! Apple” suddenly pops into your awareness. That’s what would happen if you solved the problem by insight. The solution just occurs to you and doesn’t seem to be a direct product of your ongoing stream of thought.
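Read that way, the 'analytic' strategy is essentially an exhaustive search through candidate linking words. Just to make the shape of that search explicit, here's a minimal sketch in Python; the candidate list and the table of known compounds are made-up stand-ins for illustration, not anything from the book:

```python
# Toy sketch of the 'analytic' strategy for a Remote Associates item:
# deliberately try each candidate word against all three cues.
# The candidate list and compound table are hypothetical stand-ins;
# a real solver would need a proper dictionary of compounds and phrases.

CANDIDATES = ["tree", "cake", "grass", "apple"]

KNOWN_COMPOUNDS = {  # known two-word compounds, in either order
    ("pine", "tree"), ("crab", "cake"), ("crab", "grass"),
    ("pine", "apple"), ("crab", "apple"), ("apple", "sauce"),
}

def forms_compound(a, b):
    """True if the two words form a familiar compound in either order."""
    return (a, b) in KNOWN_COMPOUNDS or (b, a) in KNOWN_COMPOUNDS

def solve_analytically(cues):
    """Methodically try each candidate until one links all three cues."""
    for candidate in CANDIDATES:
        if all(forms_compound(cue, candidate) for cue in cues):
            return candidate
    return None  # ran out of candidates with nothing to offer

print(solve_analytically(["pine", "crab", "sauce"]))  # -> apple
```

The 'insight' route, by contrast, doesn't look like anything you could write down as a loop: the answer just arrives.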

This categorisation seems suspiciously neat, and if I rely on my own introspection while solving one of these (which is obviously dubious in itself), it feels like more of a mix: I'll often generate some verbal noise about cakes and trees that sounds vaguely like I'm doing something systematic, but the main business of solving the thing seems to be going on nonverbally elsewhere. Still, I do think there's something to the distinction – the answer can be very immediate and 'poppy', or it can surface after a longer, more consciously accessible process of trying plausible words. The authors tested this in a more objective way by looking at what people do when they don't come up with the answer:

Insightfuls made more “errors of omission.” When waiting for an insight that hadn’t yet arrived, they had nothing to offer in its place. So when the insight didn’t arrive in time, they let the clock run out without having made a guess. In contrast, Analysts made more “errors of commission.” They rarely timed out, but instead guessed – sometimes correctly – by offering the potential solution they had been consciously thinking about when their time was almost up.

Kounios and Beeman's research focussed on finding neural correlates of the 'aha' moment of insight, using a combination of an EEG test to pinpoint the time of the insight, and fMRI scanning to locate the brain region:

We found that at the moment a solution pops into someone’s awareness as an insight, a sudden burst of high-frequency EEG activity known as “gamma waves” can be picked up by electrodes just above the right ear. (Gamma waves represent cognitive processing in the brain, such as paying attention to something or linking together different pieces of information.) We were amazed at the abruptness of this burst of activity—just what one would expect from a sudden insight. Functional magnetic resonance imaging showed a corresponding increase in blood flow under these electrodes in a part of the brain’s right temporal lobe called the “anterior superior temporal gyrus” (see figure 5.2), an area that is involved in making connections between distantly related ideas, as in jokes and metaphors. This activity was absent for analytic solutions.

So we had found a neural signature of the aha moment: a burst of activity in the brain’s right hemisphere.

I'm not sure how settled this is, though. I haven't tried to do a proper search of the literature, but certainly a review from 2010 describes the situation as very much in flux:

A recent surge of interest into the neural underpinnings of creative behavior has produced a banquet of data that is tantalizing but, considered as a whole, deeply self-contradictory.

(The book was published somewhat later, in 2015, but mostly cites research from prior to this review, such as this paper.)

As an outsider it's going to be pretty hard for me to judge this without spending a lot more time on it than I want to right now. However, regardless of how this holds up, I was really interested in the authors' discussion of why a right-hemisphere neural correlate of insight would make sense.

Insight and context

One of the authors, Mark Beeman, had previously studied language deficits in people who had suffered brain damage to the right hemisphere. One such patient was the trial attorney D.B.:

What made D.B. “lucky” was that the stroke had damaged his right hemisphere rather than his left. Had the stroke occurred in the mirror-image left-hemisphere region, he would have experienced Wernicke’s aphasia, a profound deficit of language comprehension. In the worst cases, people with Wernicke’s aphasia may be completely unable to understand written or spoken language.

Nevertheless, D.B. didn’t feel lucky. He may have been better off than if he’d had a left-hemisphere stroke, but he felt that his language ability was far from normal. He said that he “couldn’t keep up” with conversations or stories the way he used to. He felt impaired enough that he had stopped litigating trials—he thought that it would have been a disservice to his clients to continue to represent them in court.

D.B. and the other patients were able to understand the straightforward meanings of words and the literal meanings of sentences. Even so, they complained about vague difficulties with language. They failed to grasp the gist of stories or were unable to follow multiple-character or multiple-plot stories, movies, or television shows. Many didn’t get jokes. Sarcasm and irony left them blank with incomprehension. They could sometimes muddle along without these abilities, but whenever things became subtle or implicit, they were lost.

An example of the kind of problem D.B. struggled with is the following:

Saturday, Joan went to the park by the lake. She was walking barefoot in the shallow water, not knowing that there was glass nearby. Suddenly, she grabbed her foot in pain and called for help, and the lifeguard came running.

If D.B. was given a statement about something that occurred explicitly in the text, such as 'Joan went to the park on Saturday', he could say whether it was true or false with no problems at all. In fact, he did better than all of the control subjects on these sorts of explicit questions. But if he was instead presented with a statement like 'Joan cut her foot', where some of the facts are left implicit, he was unable to answer.

This was interesting to me, because it seems so directly relevant to the discussion last year on 'cognitive decoupling'. This is a term I'd picked up from Sarah Constantin, who herself got it from Keith Stanovich:

Stanovich talks about “cognitive decoupling”, the ability to block out context and experiential knowledge and just follow formal rules, as a main component of both performance on intelligence tests and performance on the cognitive bias tests that correlate with intelligence. Cognitive decoupling is the opposite of holistic thinking. It’s the ability to separate, to view things in the abstract, to play devil’s advocate.

The patients in Beeman's study have so much difficulty with contextualisation that they struggle with anything at all that is left implicit, even straightforward inferences like 'Joan cut her foot'. This appears to match with other evidence from visual half-field studies, where subjects are presented with words on either the right or left half of the visual field. Those on the left half will go first to the right hemisphere, so that the right hemisphere gets a head start on interpreting the stimulus. This shows a similar difference between hemispheres:

The left hemisphere is sharp, focused, and discriminating. When a word is presented to the left hemisphere, the meaning of that word is activated along with the meanings of a few closely related words. For example, when the word “table” is presented to the left hemisphere, this might strongly energize the concepts “chair” and “kitchen,” the usual suspects, so to speak. In contrast, the right hemisphere is broad, fuzzy, and promiscuously inclusive. When “table” is presented to the right hemisphere, a larger number of remotely related words are weakly invoked. For example, “table” might activate distant associations such as “water” (for underground water table), “payment” (for paying under the table), “number” (for a table of numbers), and so forth.
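As a toy way of picturing the difference (the numbers below are made up, not from the book), you can think of both coding styles as consulting the same association network but pruning it at different thresholds: 'fine' coding keeps only a few strong associates, while 'coarse' coding leaves many remote ones weakly active:

```python
# Toy illustration of 'fine' vs 'coarse' semantic coding (made-up strengths).
# Both styles read the same associations; they differ in how aggressively
# weak links are pruned.

ASSOCIATIONS = {
    "table": {
        "chair": 0.9, "kitchen": 0.8,                  # strong, obvious associates
        "water": 0.2, "payment": 0.15, "number": 0.1,  # weak, remote associates
    },
}

def activated(word, threshold):
    """Return the associates of `word` whose strength clears the threshold."""
    return {assoc: s for assoc, s in ASSOCIATIONS[word].items() if s >= threshold}

print(activated("table", threshold=0.5))   # 'fine' coding: chair, kitchen
print(activated("table", threshold=0.05))  # 'coarse' coding: remote associates too
```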

Why would picking up on these weak associations be relevant to insight? The story seems to be that this tangle of secondary meanings - the 'Lovecraftian penumbra of monstrous shadow phalanges' - works to pull your attention away from the obvious interpretation you're stuck with, helping you to find a clever new reframing of the problem.

This makes a lot of sense to me as a rough outline. In my own experience at least, the kind of thinking that is likely to lead to an insight experience feels softer and more diffuse than the more 'analytic' kind, more a process of sort of rolling the ideas around gently in your head and seeing if something clicks than a really focussed investigation of the problem. 'Thinking too hard' tends to break the spell. This fits well with the idea that insights are triggered by activation of weak associations.

Final thoughts

There's a lot of other interesting material in the book about the rest of the insight process, including the incubation period leading up to an insight flash, and the phenomenon of 'intuitions', where you feel that an insight is on its way but you don't know what it is yet. I'll never get through this review if I try to cover all of that, so instead I'm going to finish up with a couple of weak associations of my own that got activated while reading the book.

I've been getting increasingly dissatisfied with the way dual process theories split cognition into a fast/automatic/intuitive 'System 1' and a slow/effortful/systematic 'System 2'. System 1 in particular has started to look to me like an amorphous grab bag of all kinds of things that would be better separated out.

The Eureka Factor has pushed this a little further, by bringing out a distinction between two things that normally get lumped under System 1 but are actually very different. One obvious type of System 1-ish behaviour is routine action, the way you go about tasks you have done many times before, like making a sandwich or walking to work. These kinds of activities require very little explicit thought and generally 'just happen' in response to cues in the environment.

The kind of 'insightful' thinking discussed in The Eureka Factor would also normally get classed under System 1: it's not very systematic and involves a fast, opaque process where the answer just pops into your head without much explanation. But it's also very different to routine action. It involves deliberately choosing to think about a new situation, rather than one you have seen many times before, and a successful insight gives you a qualitatively new kind of understanding. The insight flash itself is a very noticeable, enjoyable feature of your conscious attention, rather than the effortless, unexamined state of absorbed action.

This was pointed out to me once before by Sarah Constantin, in the comments section of her Distinctions in Types of Thought:

You seem to be lumping “flashes of insight” in with “effortless flow-state”. I don’t think they’re the same. For one thing, inspiration generally comes in bursts, while flow-states can persist for a while (driving on a highway, playing the piano, etc.) Definitely, “flashes of insight” aren’t the same type of thought as “effortful attention” — insight feels easy, instant, and unforced. But they might be their own, unique category of thought. Still working out my ontology here.

I'd sort of had this at the back of my head since then, but the book has really brought out the distinction clearly. I'm sure these aren't the only types of thinking getting shoved into the System 1 category, and I get the sense that there's a lot more splitting out that I need to do.

I also thought about how the results in the book fit in with my perennial 'two types of mathematician' question. (This is a weird phenomenon I've noticed where a lot of mathematicians have written essays about how mathematicians can be divided into two groups; I've assembled a list of examples here.) 'Analytic' versus 'insightful' seems to be one of the distinctions between groups, at least. It seems relevant to Poincaré’s version, for instance:

The one sort are above all preoccupied with logic; to read their works, one is tempted to believe they have advanced only step by step, after the manner of a Vauban who pushes on his trenches against the place besieged, leaving nothing to chance.

The other sort are guided by intuition and at the first stroke make quick but sometimes precarious conquests, like bold cavalrymen of the advance guard.

In fact, Poincaré himself once gave a striking description of an insight flash:

Just at this time, I left Caen, where I was living, to go on a geologic excursion under the auspices of the School of Mines. The incidents of the travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step, the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience' sake, I verified the result at my leisure.

If the insight/analysis split is going to be relevant here, it would require that people favour either 'analytic' or 'insight' solutions as a general cognitive style, rather than switching between them freely depending on the problem. The authors do indeed claim that this is the case:

Most people can, and to some extent do, use both of these approaches. A pure type probably doesn’t exist; each person falls somewhere on an analytic-insightful continuum. Yet many—perhaps most—people tend to gravitate toward one of these styles, finding their particular approach to be more comfortable or natural.

This is based on their own research, in which they recorded participants' self-reports of whether they had used an 'insight' or 'analytic' approach to solve anagrams, and compared these with EEG recordings of their resting state. They found a number of differences, including more right-hemisphere activity in the 'insight' group, and lower levels of communication between the frontal lobe and other parts of the brain, indicating a more disorderly thinking style with less top-down control. This may suggest more freedom to let weak associations between thoughts have a crack at the problem, without being overruled by the dominant interpretation.

Again, and you've probably got very bored of this disclaimer by now, I have no idea how well the details of this will hold up. That's true for pretty much every specific detail in the book that I've discussed here. Still, the link between insight and weak associations makes a lot of sense to me, and the overall picture certainly triggered some useful reframings. That seems very appropriate for a book about insight.
