How should we think about thinking? In this sequence I outline an epistemology called meta-rationality, which in my opinion improves on existing epistemologies in a number of important ways. This first post introduces some of the key concepts in meta-rationality (in particular the concept of frames), provides some of the intellectual context behind my conception of meta-rationality, and lists the main claims I’ll be making in the next half-dozen posts. Those posts, which focus on introducing meta-rationality, constitute the first half of the sequence, and will be posted over the next month; the second half of the sequence focuses on more complex aspects of meta-rationality, and will be posted over the following month or two.

The traditional approach to epistemology is to focus on our knowledge of propositions like “there is a red car in the garage” or “I’m feeling thirsty”, which can in principle be evaluated as true or false. At a high level, meta-rationality is about making epistemology less reductionist by focusing less on assigning credences to isolated propositions like these, and more on the larger-scale mental entities which we actually use when thinking about complex domains—entities including:

  • Ideologies like environmentalism, neoliberalism, communism, longtermism, etc
  • Scientific paradigms like darwinism, keynesianism, quantum physics, deep learning, etc
  • Life philosophies like stoicism, conformism, careerism, etc
  • Moral drives like egalitarianism, patriotism, compassion, etc
  • Epistemologies like empiricism, scientism, various schools of rationalism, etc
  • Persistent personality traits like openness to experience, ambition, narcissism, etc
  • Wide-ranging heuristics like “follow common-sense advice” or “move as fast as possible”

I’ll call these frames.[1] I’ll very roughly define a frame as a cluster of mental entities and processes (such as concepts, beliefs, heuristics, instincts, habits, skills, mental models, desires, values, etc) which tend to be active in conjunction with each other. Under this definition, the extent to which a group of mental entities qualifies as a frame is a matter of degree (although for convenience I'll mostly talk about frames in binary terms), and we might focus on frames at different levels of abstraction in different contexts. For example, the concept of the voltage across a component in an electrical circuit tends to be associated with concepts like electrical current and resistance; and at a higher level, with knowledge about how to design circuits, and knowledge of electrical engineering more generally; and at an even higher level, with the idea that the world can be understood mechanistically and scientifically. I’ll primarily focus on high-level frames which apply across a broad range of contexts, but many of my conclusions apply to low-level frames too. (For an in-depth exploration of low-level frames, such as the ones learned by young children, I recommend Minsky’s Society of Mind.)
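As a very rough illustration of this definition (a toy sketch of my own, not a claim about how frames are actually implemented in the mind), we can picture a frame as a named cluster of entities, with frame-hood coming in degrees of overlap:

```python
# A deliberately crude sketch of frames as co-activation clusters.
# The fields and example contents are invented purely for illustration.
from dataclasses import dataclass, field

@dataclass
class Frame:
    name: str
    # Mental entities (concepts, heuristics, habits, ...) that tend
    # to become active together when this frame is in use.
    entities: set = field(default_factory=set)

    def overlap(self, other: "Frame") -> float:
        """Jaccard similarity: one crude way of expressing that
        frame-hood is a matter of degree rather than binary."""
        union = self.entities | other.entities
        return len(self.entities & other.entities) / len(union) if union else 0.0

circuits = Frame("circuit analysis", {"voltage", "current", "resistance", "Ohm's law"})
ee = Frame("electrical engineering", {"voltage", "current", "impedance", "circuit design"})
print(circuits.overlap(ee))  # partial overlap: related frames at different levels
```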

The key contention of this sequence is that we should think about thinking primarily in terms of frames and the interactions between them. I’ll be giving detailed arguments for this claim in later posts; for now, I want to start off by appealing to high-level intuitions. We’re all familiar with the feeling of having strong beliefs that we can’t persuade others of, because they’re derived not from straightforward arguments or direct evidence, but from our background perspective on the world, which has itself been learned from many different datapoints. We don’t always apply the same perspective, though—we often act very differently in different contexts, or with different people. A particularly striking example: even people who are very good at building up a coherent understanding of complex domains, like leading scientists, often have strong inconsistencies between different parts of their worldviews—e.g. ignoring contradictions between religion and science by treating them as non-overlapping magisteria. Another compelling example comes from ethics, where people who strongly buy into utilitarianism on an intellectual level often still feel viscerally compelled to “inefficiently” help those around them.

These observations have often been used to argue that we’re less “rational” than we think. But we could treat them instead as puzzles: what does it say about our minds, that enforcing consistency across disparate domains is so difficult and unnatural? My answer in this sequence: it’s because our minds should be seen as collections of many different frames, typically applied in different contexts, with consistency within each frame being a much higher priority than consistency between frames. We can think of the classic skills of rationality (like assigning credences to beliefs, or doing cost-benefit analyses of different strategies) as primarily applying within each frame separately.

By contrast, I characterize meta-rationality as the ability to deal well with multiple (possibly contradictory) frames in the presence of cognitive limitations which prevent us from fully merging them. A central example of meta-rationality is the worldview diversification approach that Open Philanthropy uses to allocate money across different worldviews, such as those focused on animal welfare, climate change, or AI risk. Within each worldview, they’re able to directly compare the expected utility of different interventions. But this doesn’t work well when comparing interventions favored by different worldviews, since different worldviews value very different things—a problem which they solve by allocating a separate budget to each worldview.
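To make the contrast concrete, here's a toy sketch in Python (all the numbers, budgets, and intervention names are invented for illustration; Open Philanthropy's actual process is far more sophisticated). The structural point is that expected-utility comparisons happen only within each worldview's own budget:

```python
# Toy illustration of worldview diversification. All figures are made up.
# Each worldview scores interventions in its own units of value, so
# expected-utility comparisons are only meaningful *within* a worldview.

worldviews = {
    "animal welfare": {
        "budget": 30.0,  # $M allocated to this worldview (assumed split)
        "interventions": {"cage-free campaigns": 5.0, "alt-protein R&D": 3.2},
    },
    "climate change": {
        "budget": 30.0,
        "interventions": {"clean-energy advocacy": 2.1, "methane monitoring": 2.8},
    },
    "AI risk": {
        "budget": 40.0,
        "interventions": {"alignment research": 7.5, "policy work": 6.0},
    },
}

for name, view in worldviews.items():
    # Within a worldview, ordinary cost-benefit reasoning applies:
    # fund the intervention with the highest (worldview-relative) score.
    best = max(view["interventions"], key=view["interventions"].get)
    print(f"{name}: spend ${view['budget']}M on '{best}'")
    # What we deliberately *don't* do is compare scores across worldviews:
    # 7.5 "AI-risk utils" and 5.0 "animal-welfare utils" share no common scale.
```

Note that the design never computes a single global ranking: that would require a common scale of value across worldviews, which is exactly what's missing.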

Meta-rationality isn’t my own term for this type of thinking: it was previously explored by David Chapman in his book In the Cells of the Eggplant, from which I draw many key ideas. Like his book, this sequence can be viewed as a reaction against rationalist approaches to epistemology which focus on the importance of reasoning (often by constructing formal frameworks for reasoning about reasoning). The rationalist approaches I critique most extensively are bayesianism (especially the version popularized by Yudkowsky, which I term bayesian rationalism) and critical rationalism (as laid out by Popper and Deutsch in particular). My critiques shouldn’t be taken as a wholesale repudiation of these epistemologies, though—rather, it’s precisely because I think they are such significant improvements on their predecessors that I consider it worthwhile to identify and improve upon their mistakes. (However, familiarity with the thinkers discussed in this paragraph isn’t crucial for understanding this sequence; and although Chapman is broadly on board with the way I present meta-rationality, we don’t agree on everything.)

In doing so, I draw most extensively on three strands of thought:

  • Philosophical pushback against rationalism, especially from Kuhn, Feyerabend, and other philosophers of science, much of which Chapman summarizes under the banner of meta-rationality.
  • Work by Scott Garrabrant and Abram Demski which attempts to adapt the bayesian approach to deal with computational limitations and other difficulties of real-world decision-making—in particular Garrabrant induction and extensions to it, geometric rationality, and embedded agency more generally.
  • The fields of machine learning (especially the subfields of deep learning and reinforcement learning) and neuroscience (especially the theory of predictive coding), which contain many important insights about cognition.

In the following posts, I’ll synthesize these lines of thinking by laying out in more detail some of the key properties I take frames to have, and their implications for how we should think:

  1. Frames are more general than the entities which other epistemologies are formulated in terms of (see Post #1: Frames in Context).
  2. Frames overlap with each other, and allow us to interpret the same phenomena in different ways (see Post #2: Operating Frames).
  3. Frames reflect modular structure in how we learn and reason about the world (see Post #3: Acquiring and Improving Frames).
  4. Frames are constructed from existing concepts and other frames (see Post #4: Constructing New Frames).
  5. Frames can gain influence over our decisions by predicting new data and retrodicting old data (see Post #5: Evaluating Frames’ Predictions, and the toy sketch after this list).
  6. Frames can contain normative content about what we should do, not just epistemic content about what we should believe, and interact with each other to give rise to our decisions (see Post #6: Evaluating Frames’ Decisions).
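To give a flavor of claim 5, here's a toy sketch (my own illustration, not a mechanism defined anywhere in the sequence, and much simpler than Garrabrant induction) in which each frame's influence is updated multiplicatively based on how well it predicted incoming data, in the style of Bayesian model averaging:

```python
# Toy sketch: frames gain influence by predicting data well.
# The frames and their predictions are invented for illustration.

frames = {
    # Each frame assigns a probability to the observed outcome of each
    # of three events (higher = it predicted the observation better).
    "frame A": [0.9, 0.8, 0.7],   # predicted the observations well
    "frame B": [0.5, 0.5, 0.5],   # uninformative
    "frame C": [0.2, 0.1, 0.4],   # predicted them badly
}
weights = {name: 1.0 for name in frames}  # equal initial influence

for step in range(3):
    for name, preds in frames.items():
        weights[name] *= preds[step]  # reward frames that foresaw the data
    total = sum(weights.values())
    # Normalized weights: each frame's current share of influence.
    print({name: round(w / total, 3) for name, w in weights.items()})
```

Frames that consistently predict well come to dominate; note, though, that this scores only predictions, and says nothing yet about the normative content discussed in claim 6.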

I intend to later add another six posts covering more complex interactions between frames; but for now I’m focusing on the basics. I hope that, by the end of these first six posts, the concept of frames will seem clear and natural. But I’m aware that the definition of frames above is a very vague one; and in general I’ll talk about frames in a non-technical way (albeit often drawing high-level links to specific algorithms). So it would be reasonable for readers to be concerned that meta-rationality is too philosophical and not sufficiently rigorous.

This concern could be mitigated by presenting meta-rationality in other ways. For example, we could potentially define it as an extension of Garrabrant induction, as discussed in the next post. Or we could perhaps define it using the terminology of predictive coding, by equating frames with hierarchical generative models, as Jan Kulveit points out in a comment. However, while each of these formulations could add rigor, I’m not currently confident that they could capture the key intuitions about meta-rationality I’m trying to convey—especially its practical implications for how we can actually think better. And I'm wary of the streetlight effect that comes from focusing on the most easily-formalized ideas: I'd rather be approximately right than precisely wrong. So my current priority is to outline an informal version of meta-rationality which describes at a high level what I see as the most important aspects of human (and potentially artificial) cognition; and to leave further formalization for future work.
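To gesture at what the predictive coding formulation might look like, here is a deliberately toy sketch (with invented numbers, and no claim to capture real hierarchical generative models or anyone else's proposal): a high-level frame variable predicts a lower-level feature, the feature predicts the observation, and both levels adjust by gradient descent on total prediction error:

```python
# A toy two-level predictive-coding sketch, purely illustrative.
# All structure and numbers here are invented for this example.

lr = 0.1
frame = 1.0        # high-level latent: the frame's overall hypothesis
feature = 0.5      # low-level latent the frame predicts
obs = 2.0          # incoming observation

for _ in range(100):
    err_low = obs - feature      # how wrong the feature's prediction is
    err_high = feature - frame   # how wrong the frame's prediction is
    # Gradient descent on total squared prediction error:
    # each level moves to reconcile the errors above and below it.
    feature += lr * (err_low - err_high)
    frame += lr * err_high

print(round(frame, 2), round(feature, 2))  # both converge toward the data
```

In a richer model along these lines, persistent prediction error at the top level is what would eventually force a change of frame.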

[1] I like the term “frame” for a few reasons. It’s a standard term in various therapeutic and relational practices (e.g. when talking about “holding a frame”) and has been used in many previous discussions on LessWrong. But it’s also related to the “frame problem” in artificial intelligence and philosophy, of how to define the scope of variables that are affected by a single action (as I’ll discuss in later posts). Some other options that seem worse: “worldview” and “ideology” capture the connection between empirical and normative content, but are too large-scale: most people only subscribe to a handful of ideologies, if that. “Lens” has the opposite problem: calling something a lens treats it as primarily doing interpretive work, whereas the things I’m thinking of go far beyond just interpreting data. “Cultural complex” is good except for the connotation of being only a cultural phenomenon, since I’m focused on how individuals learn, apply, and modify frames. “Memeplex” is probably the second-best option.

2 comments

This feels like an attempt to bootstrap Kegan fours to fives. I don't know of any model that explains when and why people make the transition. One hypothesis I have is that it involves what in Kegan four is the universal quantifier, and in Kegan five turns into 'meta all the way up'—or otherwise collapsing the infinite self-referential stack by finding a representation flexible enough that it can take itself as an argument. When this happens while you're identified with the representation, it causes a weird phenomenological effect.

I don't think I agree with you, but David Chapman, who is cited in this post as coining meta-rationality, wrote a mini e-book about Kegan's work, so you are right that they are very related. https://meaningness.com/about-my-sites