On Intelligence is a book I've read as part of my quest to understand neuroscience. It attempts to develop a unified theory of the neocortex meant to serve as a blueprint for Artificial Intelligence. I think of the book as being structured into three parts.

Part one: Artificial Intelligence and Neural Networks OR skip ahead to part two if you want to read about the cool neuroscience rather than about me lamenting the author's lack of epistemic rigor

This part is primarily about a single claim: building AI requires understanding the human brain. Depending on how you count, Jeff says this nine times in just the prologue and first chapter. To justify it, he tells us the story of how he came into contact with the field of artificial intelligence. Then and now, he laments that people in the field talk about intelligence without trying to understand the brain, whereas neuroscientists talk about the brain without trying to develop a high-level theory of intelligence. Neural networks are a small step in the right direction, but he quickly got disillusioned with them as they don't go nearly far enough; their connection to the brain is quite loose and high-level. The conclusion is apparent: someone has to bring neuroscience into AI, and only then will the field succeed. And since no-one else is doing it, Jeff steps up; that's what the book is for.

The picture he lays out makes a lot of sense if you take the claim as a given. The flaw is that he neglects to argue why it is true.

I think it's pretty hard to make excuses here. This isn't a dinner conversation; it's a 250-page book that explicitly sets out to reform an entire field. It's a context where we should expect the highest level of epistemic rigor that the author is capable of, especially given how much emphasis he puts on this point. However, after rereading this part of the book, the only evidence I can find that supports AI requiring an understanding of the brain is the following:

  • The observation that current AI architectures are not like the brain, which I think is uncontroversial but doesn't prove anything
  • A claim that you can't do AI without understanding intelligence, and you can't understand intelligence without understanding the brain, which isn't an argument unless you already believe that intelligence is inherently tied to the neocortex.
  • A claim that current AI approaches have failed. This one may have been decent evidence in 2004, which is when the book was published, but it has aged rather poorly. I think more than half of the things-AI-can't-do that Jeff names in the book are things it can do in 2020, and that's without methods getting any closer to imitating the brain or neocortex. In particular, we didn't yet have GPT-3.
  • The Chinese Room thought experiment. (A non-Chinese-speaker in a room who is handed Chinese symbols and a long list of rules to manipulate them may answer complex questions without ever understanding anything; this is analogous to what a computer does; hence a computer is not intelligent.)

And that -- there's no nice way to put it -- is weak. I believe I am more open than most of LW to the idea that studying the brain is the best way (though certainly not the only way) to build AI, but you have to actually make an argument!

I think one of the most valuable cognitive fallacies Eliezer Yudkowsky has taught me about (in both writing and video) is the conjunction fallacy. It's valuable because it's super easy to grasp but still highly applicable, since public intellectuals commit it all the time. I think what Jeff is doing in this book is a version of that. It has the caveat that the story he tells doesn't have that many specific claims in it, but it's still telling a story as a substitute for evidence. To add a bit of personal speculation, an effect I've noticed in my own writing is that I tend to repeat myself more the weaker my arguments are, perhaps feeling the need to substitute repetition for clarity. The first part of this book feels like that.

The most perplexing moment comes at the end of the second chapter. Jeff mentions an important argument against his claim: that humans in the past have commonly succeeded in copying the 'what' of evolution without studying the 'how'. The car is nothing like a cheetah, the airplane is nothing like a falcon, and so on. A great point to bring up -- but what's the rebuttal? Well, there's a statement that he disagrees with it, another reference to the Chinese Room, and that's it. It's as if merely acknowledging the argument counted as an indisputable refutation.

As someone who thinks rationality is a meaningful concept, I think this kind of thing matters for the rest of the book. If he can delude himself about the strength of his argument, why shouldn't I expect him to delude himself about his theories on the neocortex?

On the other hand, Jeff Hawkins seems to have a track record of good ideas. He's created a bunch of companies, written a programming language, and built a successful handwriting recognition tool.

My speculative explanation is that he has something like a bias toward simple narratives and cohesive stories, which just so happens to work out when you apply it to understanding the neocortex. This is true both for practical reasons (having a flawed theory may be more useful than having no theory at all), but also for epistemic reasons: if there is a simple story to tell about the neocortex (and I don't think that's implausible), then perhaps Jeff, despite his flaws, has done an excellent job uncovering it. I'm not sure whether he did, but at least the rest of the book didn't raise any red flags comparable to those in part one.

Let's get to his story.

Part two: The Brain, Memory, Intelligence, and the Neocortex

(If anyone spots mistakes in this part, please point them out.)

The Human Brain

Jeff is a skilled writer, and his style is very accessible. He tends to write simply, repeat himself, and give a lot of examples. The upshot is that a short list of his main claims can sum up most of this relatively short chapter.

  • If you scan the neocortex (which is the part of the brain where intelligence and consciousness are located), you see that it has hierarchical structure.
  • You also see that it looks quite similar everywhere. This is an observation Jeff emphasizes a lot and compares to Einstein's insight that the speed of light is the same for all observers (an insight from which coming up with special relativity was 'easy'). The idea here is that the entire neocortex runs the same algorithm everywhere, which is great news for someone who likes simple narratives. If you have read Steve's posts about the brain, you may notice that this is a point they agree on.
  • While different parts of the neocortex generally do different things (some are responsible for vision, some for audio, etc.), it is remarkably flexible. In particular, if you look at the neocortex of a blind person, the part that's usually responsible for vision is now doing other things. Very cool.
  • Even though we experience different senses like vision and smell very differently, they all reduce to the same type of thing in the neocortex. In particular, the neocortex is full of little fibers called axons, and every input reduces to patterns of many different axons firing. This is probably easy to grasp for anyone reading this since the same is true in computers: both songs and images are represented as sequences of bits.
  • Relatedly, Jeff claims it is possible for blind people to 'see' by installing a device that translates visual inputs into sequences of touch on the tongue. Also pretty cool, at least if it's true.

Memory

This chapter is largely a descriptive account of different properties of human memory. As far as I can tell, everything Jeff says here aligns with introspection.

Property #1: The neocortex stores sequences of patterns

This one is closely related to the point about type uniformity in the previous chapter. Since everything in the brain is ultimately reduced to a pattern of axons firing, we can summarize all of memory by saying that the neocortex memorizes sequences of patterns. The term sequence implies that order matters; examples here include the alphabet (hard to say backward; you probably have to run through it forward to find the next letter) and songs (which are even harder to sing backward). Naturally, this applies to memories across all senses.
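
To make the 'order matters' point concrete, here is a minimal toy sketch (my own illustration, nothing from the book): if memory stores only forward links between elements, recalling the next element is cheap, while going backward forces you to replay the whole sequence from the start.

```python
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# Store the sequence as forward links only: each letter points to its successor.
forward_links = {a: b for a, b in zip(ALPHABET, ALPHABET[1:])}

def next_letter(letter: str) -> str:
    """Cheap: a single lookup along the stored (forward) direction."""
    return forward_links[letter]

def previous_letter(letter: str) -> str:
    """Expensive: no backward links, so we replay the sequence from the start."""
    current = ALPHABET[0]
    while forward_links[current] != letter:
        current = forward_links[current]
    return current

print(next_letter("K"))      # L  (instant)
print(previous_letter("K"))  # J  (found only by walking A -> B -> ... -> J)
```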

On the other hand, Jeff later points out how sequences can sometimes be recognized even if the order is changed. If you associate a sequence of visual inputs with a certain face, this also works if you move the order around (i.e., nose -> eye -> eye rather than eye -> nose -> eye). I don't think he addresses this contradiction explicitly; it's also possible that there's something I forgot or didn't understand. Either way, it doesn't sound like a big problem; it could just be that the differences can't be too large or that it depends on how strict the order usually is.

Property #2: The neocortex recalls patterns auto-associatively

This just means that patterns are associated with themselves so that receiving a part of a pattern is enough to recall the entire thing. If you only hear 10 seconds of a song, you can easily recognize it.
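
Here is a toy sketch of auto-associative recall (again my own, not the book's mechanism): each stored pattern is associated with itself, so a fragment is enough to retrieve the whole thing.

```python
stored_patterns = [
    "somewhere over the rainbow way up high",
    "twinkle twinkle little star how i wonder what you are",
    "happy birthday to you happy birthday dear you",
]

def recall(fragment: str) -> str:
    """Return the stored pattern that overlaps most with the given fragment."""
    fragment_words = set(fragment.split())
    return max(stored_patterns, key=lambda p: len(fragment_words & set(p.split())))

print(recall("little star"))  # retrieves the full 'twinkle twinkle ...' pattern
```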

Property #3: The neocortex stores patterns in an invariant form

This one is super important for the rest of the book: patterns don't correspond to precise sensory inputs. Jeff never defines the term 'invariant'; the mathematical meaning I'm familiar with is 'unchanging under certain transformations'.[1] For example, the function f(x) = x^2 is invariant under reflection across the y-axis. Similarly, your representation of a song is invariant under a change of starting note. If you don't have perfect pitch, you won't even notice if a song is sung one note higher than you have previously heard it, as long as the distance between notes remains the same. Note that, in this case, the two songs (the original one and the one-note-higher one) have zero overlap in the input data: none of their notes are the same.
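
The song example makes 'invariant form' easy to sketch; here is a minimal toy version (my own representation, not the book's): store the melody as the intervals between notes rather than the notes themselves, and transposition changes every raw input while leaving the stored form untouched.

```python
def intervals(notes: list) -> list:
    """Reduce a sequence of absolute pitches to the differences between them."""
    return [b - a for a, b in zip(notes, notes[1:])]

original   = [60, 62, 64, 60]   # a melody as (hypothetical) MIDI pitches
transposed = [61, 63, 65, 61]   # the same melody, one note higher

# Zero overlap in the raw input data, yet an identical invariant representation:
assert set(original).isdisjoint(set(transposed))
assert intervals(original) == intervals(transposed)   # [2, 2, -4]
```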

Property #4: The neocortex stores patterns in a hierarchy

... but we'll have to wait for the chapter on the neocortex to understand what this means. Moving on.

A new framework of intelligence

The punchline in this chapter is that intelligence is all about prediction. Your neocortex constantly makes predictions about its sensory inputs, and you notice whenever those predictions are violated. You can see that this is true in a bunch of examples:

  • If someone moves your door handle two inches downward, you'll probably notice something is weird as you try to grab it (because your neocortex has memorized exactly how this movement is supposed to go)
  • If a background noise suddenly disappears, you may notice this disappearance (because it's a violated prediction), even if you hadn't even noticed the noise itself.
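
Here is a toy sketch of this 'predict constantly, notice only violations' idea (my own framing; the memorized contexts below are made up for illustration):

```python
# Memorized sequences: a context of recent inputs predicts the next input.
memorized = {
    ("reach", "grip"): "handle_at_usual_height",
    ("hum", "hum"): "hum",
}

def observe(context: tuple, actual: str) -> None:
    predicted = memorized.get(context)
    if predicted is not None and predicted != actual:
        print(f"Surprise! expected {predicted!r}, got {actual!r}")
    # Fulfilled predictions pass silently -- we never consciously notice them.

observe(("reach", "grip"), "handle_at_usual_height")   # nothing happens
observe(("reach", "grip"), "handle_two_inches_lower")  # Surprise!
observe(("hum", "hum"), "silence")                     # Surprise!
```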

I would dispute 'intelligence = prediction' and instead say 'human intelligence = prediction'. For Jeff, this is a distinction without a difference since only human intelligence is real intelligence. He even goes so far as to talk about 'real intelligence' in an earlier chapter.

How the neocortex works

We have now arrived at the heart of the book. This is the longest and by far the most technical and complicated chapter. To say something nice for a change, I think the book's structure is quite good; it starts with the motivation, then talks about the qualitative, high-level concepts (the ones we've just gone through), and finally about how they're implemented (this chapter). And the writing is good as well!

The neocortex has separate areas that handle vision, sound, touch, and so forth. In this framework, they all work similarly, so we can go through one, say the visual cortex, and largely ignore the rest.

We know (presumably from brain imaging) that the visual cortex is divided into four regions, which have been called V1, V2, V4, and IT. Each region is connected to the previous and the next one, and V1 also receives the visual inputs (not directly, but let's ignore whatever processing happens before that).

We also know that, in V1, there is a strong connection between axons and certain areas of the visual field. In other words, if you show someone an object at a certain point in their visual field, then a certain set of axons fire, and they won't fire if you move the same object somewhere else. Conversely, the axons in IT do not correspond to locations in the visual field but to high-level concepts like 'chair' or 'chessboard'. Thus, if you show someone a ring and then move it around, the axons that fire will constantly change in V1 but remain constant in IT.

You can probably guess the punchline: as information moves up the hierarchy from V1 to IT, it also moves up the abstraction hierarchy. Location-specific information gets transformed into concept-specific information in a three-step process. (Three since there are four regions.)

To do this, each region compresses the information and merely passes on a 'name' for the invariant thing it received, where a 'name' is a pattern of inputs. Then, the next region up learns patterns of these names, which (since the names are patterns themselves) are patterns of patterns. Thus, I refer to all of them simply as patterns; for V1, they're patterns of axons firing; at IT, they're patterns of patterns of patterns of patterns of axons firing. (I don't understand what, if anything, is the difference between a sequence of patterns and a pattern of patterns; I've previously called them sequences because I was quoting the book.)
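
Here is how I picture the 'names of names' idea as a toy sketch (the specific names and region dictionaries below are invented for illustration, not from the book):

```python
# Each region maps the pattern it receives to a learned 'name' and passes only
# that name upward; the names here are made up for illustration.
v1_names = {("pixel_a", "pixel_b", "pixel_c"): "horizontal_segment"}
v2_names = {("horizontal_segment", "vertical_segment"): "corner"}
# ... the same step would repeat in V4 and IT, yielding names like 'rectangle' or 'chair'.

def pass_up(region_names: dict, incoming: tuple) -> str:
    """Compress the incoming pattern into this region's learned name for it."""
    return region_names[incoming]

segment = pass_up(v1_names, ("pixel_a", "pixel_b", "pixel_c"))
corner = pass_up(v2_names, (segment, "vertical_segment"))
print(segment, corner)  # horizontal_segment corner
```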

In this model, each region remembers the finite set of names it has learned and looks to find them again in future inputs. This is useful for resolving ambiguity: if a region receives an input where one element is ambiguous between two possibilities, but only one of them completes a pattern the region knows to be common, it will interpret the element as that one. That's why we can understand spoken communication even if the speaker doesn't talk clearly, and many of the individual syllables and even words aren't understandable by themselves. It's also why humans are better than current AI (or at least the system I have on my phone) at converting audio to text.
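
A minimal sketch of this ambiguity resolution, under the assumption that 'interpreting toward known patterns' can be modeled as nearest-match lookup (my toy, not the book's mechanism):

```python
import difflib

# The finite set of patterns this region has learned (made up for illustration).
known_patterns = ["hello there", "over there", "hold the door"]

def interpret(noisy_input: str) -> str:
    """Report the learned pattern closest to the (possibly unclear) input."""
    return difflib.get_close_matches(noisy_input, known_patterns, n=1, cutoff=0.0)[0]

print(interpret("hullo thare"))  # -> 'hello there', despite the garbled syllables
```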

I've mentioned that the names passed on are invariant. This is a point I understand to be original to Jeff (the classical model has invariant representations at IT but not in the other regions). In V1, this means a region may pass on a name for 'small horizontal line segment' rather than the set of all pixels. In IT, it means that the same objects are recognized even if they are moved or rotated. To achieve this, Jeff hypothesizes that each region is divided into different subregions, where the number of these subregions is higher for regions lower in the hierarchy (and IT only has one). I.e.:

(I've re-created this image, but it's very similar to the one from the book.)

On the one hand, I'm biased to like this picture as it fits beautifully with my post on Hiding Complexity. If true, the principle of hiding complexity is even more fundamental than what my post claims: not only is it essential for conscious thought, but it's also what your neocortex does, constantly, with (presumably all five kinds of) input data.

On the other hand, this picture seems uniquely difficult to square with introspection: my vision appears to me as a continuous field of color and light, not as a highly-compressed and invariant representation of objects. It doesn't even seem like invariance is present at the lowest level (e.g., different horizontal line segments don't look alike). Now, I'm not saying that this is a smackdown refutation; a ton of computation happens unconsciously, and I'm even open to the idea that more than one consciousness lives in my body. I'm just saying that this does seem like a problem worth addressing, which Jeff never does. Oh well.

Moving on -- why do the arrows go in both directions? Because the same picture is used to make predictions. Whenever you predict a low-level pattern like "I will receive tactile inputs because I recognized the high-level pattern of 'opening the door'", your brain has to take this high-level, invariant thing and translate it back into low-level patterns.

To add another bit of complexity, notice that an invariant representation is inevitably compressed, which means that it can correspond to more than one low-level pattern. In the case of opening the door, this doesn't matter too much (although even here, the low level may differ depending on whether or not you wear gloves), so take the case of listening to a song instead. Even without perfect pitch, you can often predict the next note exactly. This means that your neocortex has to merge the high-level pattern (the 'name' of the song) with the low-level pattern 'a specific note' to predict the next note.
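
Concretely, here is a toy sketch of that merge (my own illustration, not the book's wiring): the invariant 'name' of the song stores only intervals, so predicting the exact next note requires combining it with the specific pitch currently being heard.

```python
# The invariant 'name' of the song stores only intervals -- no absolute pitches.
song_intervals = {"twinkle": [0, 7, 0, 2, 0, -2]}   # opening of a hypothetical song memory

def predict_next_note(song_name: str, position: int, current_pitch: int) -> int:
    """High-level name supplies the interval; low-level input supplies the pitch."""
    return current_pitch + song_intervals[song_name][position]

# The same high-level pattern predicts different exact notes at different transpositions:
print(predict_next_note("twinkle", 1, 60))  # 67
print(predict_next_note("twinkle", 1, 62))  # 69 -- same song, sung a whole tone higher
```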

If you've been thinking along, you might now notice that this requires connections that go to the next lower region to point to several possible subregions. (E.g., since patterns in V1 are location-specific depending on which subregion they're in, but patterns in IT are not, the same pattern in IT needs to have the ability to reach many possible subregions in V1.)

According to Jeff, this is precisely the case: your neocortex is wired such that connections going up have limited targets, whereas connections going down can end up at all sorts of places. However, I'm not sure whether this is an independent piece of knowledge (in which case it would be evidence for the theory) or a piece he just hypothesizes to be true (in which case it would be an additional burdensome detail).

This is implemented with an additional decomposition of the neocortex into layers, which are orthogonal to regions. For the sake of this review, I'm going to hide that complexity and not go into any detail.

To overexplain the part I did cover, though, here is [my understanding of what happens if you hear the first few notes of a familiar song] in as much detail as I can muster:

  1. The notes get translated into patterns of axons firing, are recognized by the lowest region, and passed up the hierarchy.
  2. At some point, one of the regions notices that the pattern it receives corresponds to the beginning of a pattern which represents the entire song (or perhaps a section of the song). This is good enough, so it sends the name for the entire song/section upward. (This step corresponds to the fact that memory is auto-associative.)
  3. The upshot of 1-2 is that you now recognize what song you're hearing.
  4. Since the neocortex predicts things constantly, it also tries to predict the upcoming auditory inputs.
  5. To do this, it transforms the high-level pattern corresponding to the song or section to a low-level pattern.
  6. There are several possible ways to do this, so the neocortex uses the incoming information about the precise note that's playing to decide which specific variation to choose. This is supported by the neocortex's architecture.
  7. Using this mechanism, information flows down the hierarchy into the lowest region, where a pattern corresponding to the precise next note fires -- and does so before you hear it.
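
Pulling the seven steps together, here is a toy end-to-end sketch of how I understand the model (the data structures and region behavior are illustrative assumptions, not the book's biology):

```python
def to_intervals(pitches):
    """Invariant form: successive differences instead of absolute pitches."""
    return tuple(b - a for a, b in zip(pitches, pitches[1:]))

# Higher-region memory: invariant 'names' for whole songs (interval form).
song_memory = {
    "twinkle": (0, 7, 0, 2, 0, -2, -2, 0, -1, 0, -2, 0, -2),
}

def recognize(heard_pitches):
    """Steps 1-3: match the beginning of the input against stored songs (auto-association)."""
    prefix = to_intervals(heard_pitches)
    for name, ivals in song_memory.items():
        if ivals[:len(prefix)] == prefix:
            return name
    return None

def predict_next(heard_pitches):
    """Steps 4-7: send the song's name back down and merge it with the current pitch."""
    name = recognize(heard_pitches)
    if name is None:
        return None
    position = len(heard_pitches) - 1
    return heard_pitches[-1] + song_memory[name][position]

heard = [60, 60, 67, 67]       # the first four notes; any transposition works equally well
print(recognize(heard))        # 'twinkle'
print(predict_next(heard))     # 69 -- the next note, predicted before it is heard
```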

Relatedly, both Jeff and Steve say that about ten times as many connections flow down the hierarchy as up (except that Steve's model doesn't include a strict hierarchy). Prediction is important. These connections flowing 'down' are called feedback, which is extremely confusing since they carry the predictions, while the connections in the other direction, called feed-forward, carry what is (in the everyday sense) the feedback on those predictions.

One last detail for part two: the different parts of the cortex are not separate; rather, there are additional 'association' areas that merge several kinds of inputs. In fact, Jeff writes that most of the neocortex consists of association areas. This corresponds to the fact that inputs from one sense can trigger predictions about inputs from other senses. (If you see a train approaching, your neocortex will predict that you'll also hear it soon.)

Part three: Consciousness, Creativity, and the Future of Intelligence

The final stretch of the book goes back to being non-technical and easily accessible.

The section on creativity is another place where I've drawn a strong connection to one of the posts I've written recently, this time the one on intuition. As far as I can tell, Jeff essentially makes the same point I made (which is that there is no meaningful separation, rather it's intuition all the way down), except that he calls it 'creativity'.

Now, on the one hand, it honestly seems to me that the use of 'creativity' here is just a confused way of referring to a concept that really wants to be called intuition.[2] On the other hand, maybe I'm biased. Anyone who read both pieces may be the judge.

The part about consciousness doesn't seem to me to be too interesting. Jeff wants to explain away the hard problem, simply stating that 'consciousness is what it feels like to have a neocortex'. He does spend a bit of time on why our input senses appear to us to be so different, even though they're all just patterns, which doesn't feel like one of the problems I would lose sleep over, but perhaps that's just me.

In 'The future of Intelligence', Jeff casually assures us that there is nothing to worry about from smart machines because they won't be anything like humans (and thus won't feel resentment for being enslaved). This was my favorite part of the book as it allowed me to reorient my career: instead of pursuing the speculative plan of writing about Factored Cognition in the hopes of contributing minimally to AI risk reduction (pretty silly given that AI risk doesn't exist), my new plan is to apply to a company that writes software for self-parking cars.[3] Thanks, Jeff!

Appendix: 11 Testable Predictions!

... that I cannot evaluate because they're all narrowly about biology, but I still wanted to give Jeff credit for including them. They don't have probabilities attached.

Questions I (still) have:

Here are some:

  • How much support in the literature is there for the hierarchical structure? Jeff sure makes it sound like it's a closed case, but he rarely tells me which parts of his theory are certain and which are speculative.
  • If there is a hierarchical structure, why do all of our senses appear uniform -- especially given that most of the rest of the model agrees with introspection (which suggests that introspection is reliable)?
  • What is the difference between patterns and memory? My understanding of what Jeff says is that they're the same thing.
  • Where is the feeling that we are free to steer our thoughts to whatever topic we please coming from? I get that it's evolutionarily adaptive, but that's the why, not the how.

Verdict: should you read this book?

Maybe? Probably not? I'm not sure. Depends on how understandable the review was. Maybe if you want more details about the chapter on the neocortex in particular.

In any case, I think Steve's writing is altogether better, so if anything, I would only recommend the book if you've already read at least these two posts.

Note that Jeff has a new book coming out on 2021/03/02; it will be called A Thousand Brains: A New Theory of Intelligence.


  1. To be more specific, you generally don't say anything is invariant per se, but that it's invariant under some specific transformation. E.g., the parabola defined by f(x) = x^2 is invariant under the transformation defined by x -> -x. ↩︎

  2. As I see it, 'creativity' refers to a property of the output, whereas intuition appears to refer to a mode of thinking. However, the two are closely linked in that the 'creativity' label almost requires that the output was created by 'intuition'. Thus, if you look at it at the level of outputs, then creativity looks like a subset of intuition.

    Conversely, I would not consider intuition a subset of creativity, and the cases where something is done via intuition but is not creative are exactly those where Jeff's explanation seems to me to fail. For example, he talks about the case of figuring out where the bathroom is in a restaurant you're visiting for the first time. To do this, you have to generalize from information about bathrooms in previous restaurants. Is this creativity? I would say no, but Jeff says yes. Is it intuition? I would say yes. In a nutshell, this is why I feel like he is talking about intuition while calling it creativity: I think the set of things he calls creativity is very similar to the set of things most people call intuition, and less similar to the set of things most people call creativity. ↩︎

  3. By which I mean that the chapter has not caused me to update my position since it doesn't address any of the reasons why I believe that AI risk is high. ↩︎

Comments

my vision appears to me as a continuous field of color and light, not as a highly-compressed and invariant representation of objects.

One thing is: I have an artist friend who said that when he teaches drawing classes, he sometimes has people try to focus on and draw the "negative space" instead of the objects—like, "draw the blob of wall that is not blocked by the chair". The reason is: most people find it hard to visualize the 3D world as "contours and surfaces as they appear from our viewpoint", we remember the chair as a 3D chair, not as a 2D projection of a chair, except with conscious effort. The "blob of wall not blocked by the chair" is not associated with a preconception of a 3D object so we have an easier time remembering what it actually looks like from our perspective.

Another thing is: When I look at a scene, I have a piece of knowledge "that's a chair" or "this is my room" which is not associated in any simple way with the contours and surfaces I'm looking at—I can't give it (x,y) coordinates—it's just sorta a thing in my mind, in a separate, parallel idea-space. Likewise "this thing is moving" or "this just changed" feels to me like a separate piece of information, and I just know it, it doesn't have an (x,y) coordinate in my field of view. Like those motion illusions that were going around twitter recently.

Our conscious awareness consists of the patterns in the Global Neuronal Workspace. I would assume that these patterns involve not only predictions about the object-recognition stuff going on in IT but also predictions about a sampling of lower-level visual patterns in V2 or maybe even V1. So then we would get conscious access to something closer to the original pattern of incoming light. Maybe.

I dunno, just thoughts off the top of my head.

Thanks for those thoughts. And also for linking to Kaj's post again; I finally decided to read it and it's quite good. I don't think it helps at all with the hard problem (i.e., you could replace 'consciousness' with some other process in the brain that has these properties but doesn't have the subjective component, and I don't think that would pose any problems), but it helps quite a bit with the 'what is consciousness doing' question, which I also care about.

(Now I'm trying to look at the wall of my room and to decide whether I actually do see pixels or 'line segments', which is an exercise that really puts a knot into my head.)

One of the things that makes this difficult is that, whenever you focus on a particular part, it's probably consistent with the framework that this part gets reported in a lot more detail. If that's true, then testing the theory requires you to look at the parts you're not paying attention to, which is... um.

Maybe evidence here would be something like: do you recognize concepts in your peripheral vision more than hard-to-classify things? Actually, I think you do. (E.g., if I move my gaze to the left, I can still kind of see the vertical cable of a light on the wall even though the wall itself seems not visible.)

(Now I'm trying to look at the wall of my room and to decide whether I actually do see pixels or 'line segments', which is an exercise that really puts a knot into my head.)

Sorry if I'm misunderstanding what you're getting at but...

I don't think there's any point in which there are signals in your brain that correspond directly to something like pixels in a camera. Even in the retina, there's supposedly predictive coding data compression going on (I haven't looked into that in detail). By the time the signals are going to the neocortex, they've been split into three data streams carrying different types of distilled data: magnocellular, parvocellular, and koniocellular (actually several types of konio I think), if memory serves. There's a theory I like about the information-processing roles of magno and parvo; nobody seems to have any idea what the konio information is doing and neither do I. :-P

But does it matter whether the signals are superficially the same or not? If you do a lossless transformation from pixels into edges (for example), who cares, the information is still there, right?

So then the question is, what information is in (say) V1 but is not represented in V2 or higher layers, and do we have conscious access to that information? V1 has so many cortical columns processing so much data, intuitively there has to be compression going on.

I haven't really thought much about how information compression in the neocortex works per se. Dileep George & Jeff Hawkins say here that there's something like compressed sensing happening, and Randall O'Reilly says here that there's error-driven learning (something like gradient descent) making sure that the top-down predictions are close enough to the input. Close on what metric though? Probably not pixel-to-pixel differences ... probably more like "close in whatever compressed-sensing representation space is created by the V1 columns"...?

Maybe a big part of the data compression is: we only attend to one object at a time, and everything else is lumped together into "background". Like, you might think you're paying close attention to both your hand and your pen, but actually you're flipping back and forth, or else lumping the two together into a composite object! (I'm speculating.) Then the product space of every possible object in every possible arrangement in your field of view is broken into a dramatically smaller disjunctive space of possibilities, consisting of any one possible object in any one possible position. Now that you've thrown out 99.999999% of the information by only attending to one object at a time, there's plenty of room for the GNW to have lots of detail about the object's position, color, texture, motion, etc. 

Not sure how helpful any of this is :-P

For the hard problem of consciousness, the steps in my mind are

1. GNW -->
2. Solution to the meta-problem of consciousness -->
3. Feeling forced to accept illusionism -->
4. Enthusiastically believing in illusionism.

I wrote the post Book Review: Rethinking Consciousness about my journey from step 1 --> step 2 --> step 3. And that's where I'm still at. I haven't gotten to step 4, I would need to think about it more. :-P

Thanks for writing this nice review!

I agree about part 1. I don't think there's a meta-level / outside-view argument that AGI has to come from brain-like algorithms—or at least it's not in that book. My inside-view argument is here and I certainly don't put 100% confidence in it.

Interestingly, while airplanes are different from birds, I heard (I think from Dileep George) that the Wright Brothers were actually inspired by soaring birds, which gave them confidence that flapping wings were unnecessary for flight.

Jeff is a skilled writer

Well, the book was coauthored by a professional science writer if I recall... :-P

(If anyone spots mistakes in [part 2], please point them out.)

It's been a while, but the one that springs to mind right now is Jeff's claim (I think it's in this book, or else he's only said it more recently) that all parts of the neocortex issue motor commands. My impression was that only the frontal lobe does. For example, I think Jeff believes that the projections from V1 to the superior colliculus are issuing motor commands to move the eyes. But I thought the frontal eye field was the thing moving the eyes. I'm not sure what those projections are for, but I don't think motor commands is the only possible hypothesis. I haven't really looked into the literature, to be clear.

Relatedly, both Jeff and Steve say that about ten times as many connections are flowing down the hierarchy (except that Steve's model doesn't include a strict hierarchy) than up.

I might have gotten it from Jeff. Hmm, actually I think I've heard it from multiple sources. "Not a strict hierarchy" is definitely something that I partly got from Jeff—not from this book, I don't think, but from later papers like this.

I think it's a great book and anyone interested in the brain at a well informed layperson level would probably enjoy it and learn a lot from it.

Hawkins makes a good case for a common cortical algorithm - the studies involving ferrets whose visual nerves were connected to the auditory centres and who learned to see are one compelling piece of evidence. He makes some plausible arguments that he has identified one key part of the algorithm - hierarchical predictive models - and he relates it in some detail to cortical micro-architecture. It is also quite interesting how motor control can be seen as a form of predictive algorithm (though frustratingly this is left at the hand-waving level and I found it surprisingly hard to convert this insight into code!).

His key insight is that we learn by recognizing temporal patterns, and that the temporal nature of our learning is central. I suspect this has a lot of truth to it and remains under-appreciated.

There are clear gaps that he kind of glosses over e.g. how neuronal networks produce higher level mental processes like logical thought. So it is not perfect and is not a complete model of the brain. I would definitely read his new book when it comes out.

It is also quite interesting how motor control can be seen as a form of predictive algorithm (though frustratingly this is left at the hand-waving level and I found it surprisingly hard to convert this insight into code!). 

I'd be interested if you think my post Predictive Coding and Motor Control is helpful for filling in that gap.