NB: Originally published on Map and Territory on Medium. This is an old post originally published on 2016-09-10. It was never previously cross-posted or linked on LessWrong, so I'm adding it now for posterity. It's old enough that I can no longer confidently endorse it, and I won't bother trying to defend it if you find something wrong, but it might still be interesting.
I find Kegan’s model of psychological development extremely useful. Some folks I know disagree on various grounds. These are some accumulated responses to critiques I’ve encountered.
Before we dive into these critiques, though, allow me to attempt a brief introduction to the theory (though this is a tough undertaking, as we’ll discuss below). Robert Kegan, later along with Lisa Lahey, put forward a theory of developmental psychology rooted in complexity of meaning making. It is influenced by and builds on the work of Piaget, as well as that of Erikson and Kohlberg, who extended developmental psychology to consider the possibility of adult development.
Kegan’s theory focuses on the maximally complex models people can make of the world in near construal mode. Development is along a continuous gradient but with clear “levels” where a particular kind of complexity is fully available to the thinker. These are classified from 1 to 5 and can be summarized in many ways, though fundamentally they correspond to when a person can form fully-articulable models of things, relationships between things, systems, relationships between systems, and systems of systems (holons), respectively.
This is an exceedingly dense introduction, though, and I know of no good short explanation. The best resources on the topic remain Kegan’s seminal The Evolving Self and his later In Over Our Heads for a more approachable, example-laden presentation.
The first, and perhaps strongest, critique of Kegan’s theory is that it’s very hard for anyone to explain. Kegan begins In Over Our Heads with the story of receiving a letter from a student assigned to read The Evolving Self. The student writes that The Evolving Self is full of interesting ideas, but that he got so frustrated trying to make sense of it that he wanted to “punch [Kegan] in the teeth”.
Partly this is because Kegan has a strong literary and classics background, so The Evolving Self is full of very precise language with subtle meanings, many so subtle that Kegan takes multipage digressions to explain them. But In Over Our Heads and his later books written with Lahey and other coauthors use more familiar language and yet still leave people confused.
The theory seems to defy simple explanation. I’ve yet to find one written by Kegan, Lahey, or anyone else that was able to reliably convey in less than 40,000 words a reasonably coherent and complete view of it. As one person I know put it, the theory reads to him as analogous to someone saying there are invisible dragons, undetectable by readily available means, but which you will notice if you devote at least 20 hours to the study of invisible dragons.
Yet, those of us who have put in the time to “see the invisible dragons” tend to be pretty excited about the theory. Kegan gives us a way to understand and construct many aspects of human behavior and thought that time and again prove consistent and reflective of reality. So if it works so well, why is it so hard to explain?
There are a few ways things can be hard to explain. One is that they are unintuitive. Physics is like this: we perceive the world as if it operated the way Aristotle imagined it worked, but it turns out this approximation breaks down at extremes, and the quest to find a complete theory forces us to consider ever more exotic phenomena.
Another way things may be hard to explain is that they’re complicated. Machines and living things are like this, with engineers and biologists mostly struggling to make clear what’s happening in systems where lots of details matter. A clock might fail if it’s missing a tooth on one gear or a frog might die if it’s missing a sequence in its DNA, and understanding why is a messy business of picking through tightly interwoven threads of causality.
But perhaps the most vexing way something can be hard to explain is when it’s complex. That is to say, even if it has few details and works in a straightforward manner, thinking through how it works is still hard. Game theory, economics, and most everything touched by mathematics is like this: just a few “simple” rules lead to bewildering complexity under combination.
So when trying to explain a theory like Kegan’s that has at its heart a developmental progression in human capacity to cope with complexity, it’s perhaps unsurprising that the complexity can collapse back in on itself and make the theory look like disjointed rubble. The theory in fact predicts this: because it’s a theory about the relationships between systems (i.e. the change in human meaning making over time), by its own expectations it will prove difficult to grasp intuitively unless the reader has already attained the capacity to naturally reason about relationships between systems in near construal mode (level 4 in Kegan’s model).
To most people this feels like the theory saying “you can’t understand it until you already understand it”, but there’s more going on here. It’s instead saying that Kegan’s developmental theory belongs to a class of things that cannot be fully understood without the ability to naturally, intuitively work with the relationships between systems. Without that ability it may be understood in other ways, in particular using far construal mode, but that is demanding on the level of learning algebra, calculus, or differential equations, which is to say something that even the brightest among us struggle with.
But if it’s really this hard, why do people feel they can reject Kegan when they can’t reject, say, abstract algebra in the same way? They may find they completely lack the capacity to understand what’s going on in abstract algebra in near mode, yet aside from a few mathematicians with technical objections, no one thinks abstract algebra fails to model the parts of reality it attempts to model, whether they understand it or not. At worst it’s just some of that “math stuff” other people worry about but they don’t “get”.
The difference, as Robin Hanson has observed in the general case, is that Kegan’s is a theory about stuff we are intimately familiar with: people. We are happy to defer to experts and theories we don’t understand on topics we don’t feel we have much of a grasp on, like abstract algebra, but as things get progressively more “real” we feel less inclined to trust complex theories experts put forward that we don’t understand ourselves.
There’s a sort of escalating scale of distrust in experts that’s a function of distance from lived experience, social agreement on who has expertise, and the availability of evidence to check our understanding against. Basically everyone trusts experts in math because it’s far from lived experience and we agree that mathematicians are the math experts, even though without training we can only easily validate the veracity of the simplest mathematical claims.
Slightly fewer, though still most, folks trust experts in physics. People agree that physicists are the experts and have lots of evidence to prove their unintuitive theories are right (planes fly, electricity powers our devices, computers work with no moving parts). The only difficulty for physicists is that we all live physics, so there’s a constant battle against violations of intuition they must overcome to convince us of their theories.
Less trusted still are doctors, economists, and philosophers. Somewhere between economists and philosophers we find anyone attempting to explain human behavior. We all have lots of experience with it and lots of evidence around to check against, so the only thing holding up these experts is our agreement that their expertise exists because someone gave them an advanced degree in it.
So in general people feel free to reject arguments about human behavior that don’t seem intuitive even if they are provided by experts. It’s for the same meta-reason that no one listens to economists and that philosophers have been engaged in the same discussions for millennia: it feels easy to reject what doesn’t feel true when it’s something we have a lot of experience with and can easily gather data on.
Is this why folks who find it hard to understand Kegan often choose to reject it? I suspect probably yes, but then again I’m asking you to accept my argument about human behavior concerning belief strength in theories people don’t fully comprehend and are not expert in, so I’ll leave my response to this critique here before I ascend too far up a house of cards.
So suffice it to say: Kegan’s theory is complex, complex enough that it’s predictably hard to understand, and it concerns a topic where we place little trust in experts.
Kegan is sometimes presented as “wrong” because it’s not always accurate. That is to say, because it’s a theory that presents a model of the world, it has edge cases at which it seems to break down. This is a standard objection to all models in all domains and is uninteresting, but since there is a high likelihood of confusion due to lack of trust in expertise here, it’s worth covering.
A model is some explanation and prediction of how the world works. For example, atomic theory gives us a way of understanding matter as indivisible (atomic) particles. Like all theories, it’s “wrong” in that reality is not actually made up of atoms — it’s just reality. Instead atoms are a way of understanding reality that let us explain phenomena we see and predict future phenomena with some degree of accuracy. To the extent that atomic theory predicts what happens in reality, it is useful to the purpose of predicting future events. This doesn’t make it “right”, just predictive enough for our needs.
When atomic theory fails to make correct predictions, it’s not “wrong”. Rather, the theory is incomplete because it’s a model and not reality, and the only perfect model of reality is reality itself, just as the only perfect map of the Earth is the Earth itself.
So Kegan’s developmental theory is naturally not a perfect predictor of reality. We can only judge it by how accurate it is for the things we want to use it for. Whether or not it’s accurate enough to be useful is what we’ll explore in the remaining critiques.
The remaining two major objections are technical in that they assume an understanding of Kegan and find problems on internal grounds. Feel free to just skip to the end if these are not of interest to you.
The first problem is that Kegan differentiates expectations of what you can do in near and far mode. I’ll note here, though, that Kegan does not explicitly reference construal level theory, dual process theory, or any other two-part theory of mind. This is mostly an artifact of its time: Kegan wrote The Evolving Self before these theories were well developed, and instead spends a decent number of words explaining that he’s focused on the capacity for intuitive, immediate, natural ratiocination.
Lacking a referent to contrast near and far modes, some people naturally object to the theory on the grounds that mathematicians, for example, show incredible capacity to reason about holons in their 20s despite lacking the behavior patterns expected of someone with this capacity. The difference, of course, is that mathematicians do their work in far mode, and are in fact exceptionally talented at thinking in far mode. But because day-to-day activity is too complicated to fit in far mode, far mode isn’t used to engage with it, so that far-mode capacity for handling greater complexity does not extend to near mode.
It’s unclear whether capacity to handle complexity in near mode extends to far mode. It seems likely, but there’s not much data on, for example, people becoming mathematicians in their 50s after struggling with math for the previous five decades.
The second objection is that Kegan’s theory is not directly testable because it’s a theory about changes in the way of meaning making, which is inherently unobservable since it exists only as a dialectic between perception and reality. While it may be true that you can’t directly test whether the model is how reality is structured, this is a problem for all theories of mind, and it has the same solution: you can test the predictions. We can check whether the expected behavior of people at particular Kegan levels correlates with their actual behavior.
There’s unfortunately very little data on this. About the best we have comes from Lahey and her work applying Kegan’s model to education reform and management consulting, and most of the available data I’m aware of was collected post level assessment or informally, so it’s suspect. I happily concede this is a major issue and would love to see more data collected, but I consider it unlikely: in the past 30 years the theory has gained little traction, largely it seems due to its complexity, so not enough people are working in the area to generate the data needed to sufficiently test the theory.
I’ve tried to address here the most common objections I’ve encountered to Kegan’s theory. If you notice additional categories of objection I’ve left out, feel free to bring them up in the comments, and I’ll see whether they are tractable problems or tear the whole thing down.
If reading this has piqued your interest in Kegan’s work, I highly recommend reading In Over Our Heads and The Evolving Self, in that order. For applications of the theory, check out Kegan and Lahey’s later works; for a philosophical incorporation of Kegan, I suggest reading David Chapman.