There are two interpretations of this post, weak and strong.

Weak interpretation:

I describe a framework about "three levels of exploration". I use the framework to introduce some of my ideas. I hope that the framework will give more context to my ideas, making them more understandable. I simply want to find people who are interested in exploring ideas, whether just for the sake of exploring or for a specific goal.

Strong interpretation:

I use the framework as a model of intelligence. I claim that any property of intelligence boils down to the "three levels of exploration": any talent, any skill. The model is supposed to be "self-evident" because of its simplicity; it's not based on direct analysis of famous smart people.

Take the strong interpretation with many grains of salt, of course, because I'm not an established thinker and I haven't achieved anything intellectual. I just thought "hey, this is a funny little simple idea, what if all intelligence works like this?", that's all.

That said, I'll need to make a couple of extraordinary claims "from inside the framework" (i.e. assuming it's 100% correct and 100% useful). Just because that's in the spirit of the idea. Just because it allows me to explore the idea to its logical conclusion. Definitely not because I'm a crazy man. You can treat the most outlandish claims as sci-fi ideas.

A formula of thinking?

Can you "reduce" thinking to a single formula?

Can you show a single path of the best and fastest thinking?

Well, there's an entire class of ideas which attempt to do this in different fields, especially the first one (the first two ideas are well-known to everyone here):

My idea is just another attempt at reduction. You don't have to treat such attempts 100% seriously in order to find value in them. You don't have to agree with them.

Three levels of exploration

Let's introduce my framework.

In any topic, there are three levels of exploration:

  1. You study a single X.
  2. You study types of different X. Often I call those types "qualities" of X.
  3. You study types of changes (D): in what ways different X change/get changed by a new thing Y. Y and D need to be important even outside of the (main) context of X.

The point is that at the 2nd level you study similarities between different X directly, but at the 3rd level you study similarities indirectly through new concepts Y and D. The letter "D" means "dynamics".

I claim that any property of intelligence can be boiled down to your "exploration level": any talent, any skill, and even vaguer things such as "level of intentionality". I claim that the best and most likely ideas come from the 3rd level. The 3rd level defines the absolute limit of currently conceivable ideas. So, it also indirectly defines the limit of possible/conceivable properties of reality.

You don't need to trust those extraordinary claims. If the 3rd level simply sounds interesting enough to you and you're ready to explore it, that's good enough. I'll discuss some "core questions" about the framework at the end of the post.

Three levels simplified

A vague description of the three levels:

  1. You study objects.
  2. You study qualities of objects.
  3. You study changes of objects.


  1. You study a particular thing.
  2. You study everything.
  3. You study abstract ways (D) in which the thing is changed by "everything".


  1. You study a particular thing.
  2. You study everything.
  3. You study everything through a particular thing.

So yeah, it's a Hegelian dialectic rip-off. Down below are examples of applying my framework to different topics. You don't need to read them all to join the discussion, of course.

Exploring debates

1. Argumentation

I think there are three levels of exploring arguments:

  1. You judge arguments as right or wrong. Smart or stupid.
  2. You study types of arguments. Without judgement.
  3. You study types of changes (D): how arguments change/get changed by some new thing Y. ("dynamics" of arguments)

If you want to get a real insight about argumentation, you need to study how (D) arguments change/get changed by some new thing Y. D and Y need to be important even outside of the context of explicit argumentation.

For example, Y can be "concepts". And D can be "connecting/separating" (a fundamental process which is important in a ton of contexts). You can study in what ways arguments connect and separate concepts.

A simplified political example: a capitalist can tend to separate concepts ("bad things are caused by mistakes and bad actors"), while a socialist can tend to connect concepts ("bad things are caused by systemic problems"). Conflict Vs. Mistake is just a very particular version of this dynamic[1]. Different manipulations with concepts create different arguments and different points of view. You can study all such dynamics. You can trace arguments back to fundamental concept manipulations. It's such a basic idea, and yet nobody has done it for informal argumentation[2]; Aristotle did it 2,400 years ago, but only for formal logic.

Arguments: conclusion

I think most of us are at level 1 in argumentation: we throw arguments at each other like angry cavemen without studying what an "argument" is and/or what dynamics it creates. If you completely unironically think that "stupid arguments" exist, then you're probably on the 1st level. Professional philosophers are at level 2 at best, but usually lower (they are surprisingly judgemental). At least they are somewhat forced to be tolerant of the most diverse types of arguments due to their profession.

On what level are you? Have you studied arguments without judgement?

2. Understanding/empathy

I think there are three levels in understanding your opponent:

  1. You study a specific description (X) of your opponent's opinion. You can pass the Ideological Turing Test in a superficial way. Like a parrot.
  2. You study types of descriptions of your opponent's opinion. ("Qualities" of your opponent's opinion.) You can "inhabit" the emotions/mindset of your opponent.
  3. You study types of changes (D): how the description of your opponent's opinion changes/gets changed by some new thing Y. D and Y need to be important even outside of debates.

For example, Y can be "copies of the same thing" and D can be "transformations of copies into each other". Such Y and D are important even outside of debates.

So, on the 3rd level you may be able to describe the opponent's position as a weaker version/copy of your own position (Y) and clearly imagine how your position could turn out to be "the weaker version/copy" of the opponent's views. You can imagine how the opponent's opinion transforms into a truth and your opinion transforms into a falsehood (D).

Other interesting choices of Y and D are possible. For example, Y can be "complexity of the opinion [in a given context]"; D can be "choice of the context" and "increasing/decreasing of complexity". You can run the opinion of your opponent through different contexts and see how much it reacts to/accommodates the complexity of the world.

Empathy: conclusion

I think people very rarely do the 3rd level of empathy.

Doing it systematically would lead to a new political/epistemological paradigm.

Exploring philosophy

1. Beliefs and ontology

I think there are three levels of studying the connection between beliefs and ontology:

  1. You think you can see the truth of a belief directly. For example, you can say "all beliefs which describe reality in a literal way are true". You get stuff like Naïve Realism. "Reality is real."
  2. You study types of beliefs. You can say that all beliefs of a certain type are true. For example, "all mathematical beliefs are true". You get stuff like Mathematical Universe Hypothesis, Platonism, Ontic Structural Realism... "Some description of reality is real."
  3. You study types of changes (D): how beliefs change/get changed by some new thing Y. You get stuff like Berkeley’s subjective idealism and radical probabilism and Bayesian epistemology: the world of changing ideas. "Some changing description of reality is real."

What can D and Y be? Both things need to be important even outside of the context of explicit beliefs. A couple of versions:

  • Y can be "semantic connections". D can be "connecting/separating [semantic connections]". Both things are generally important, for example in linguistics, in studying semantic change. We get Berkeley's idealism.
  • Y can be "probability mass" or some abstract "weight". D can be "distribution of the mass/weight". We get probabilism/Bayesianism.
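
The second bullet can be made concrete with a toy Bayesian update, where D literally redistributes probability mass (Y) across competing beliefs. A minimal sketch; the hypotheses and all the numbers are invented for illustration:

```python
# Y = "probability mass", D = "distribution of the mass":
# a Bayesian update literally moves weight between competing hypotheses.

def bayes_update(prior, likelihood):
    """Redistribute probability mass across hypotheses given evidence likelihoods."""
    unnormalized = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: mass / total for h, mass in unnormalized.items()}

# Two made-up hypotheses about a coin.
prior = {"fair": 0.5, "biased": 0.5}
# Likelihood of observing "heads" under each hypothesis.
likelihood = {"fair": 0.5, "biased": 0.9}

posterior = bayes_update(prior, likelihood)  # mass flows toward "biased"
```

No mass is created or destroyed: the posterior still sums to 1, the weight has only moved.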

Thinking at the level of semantic connections should be natural to people, because they use natural language and... neural nets in their brains! (Berkeley makes a similar argument: "hey, folks, this is just common sense!") And yet this idea is extremely alien to people epistemology-wise and ontology-wise. I think the true potential of the 3rd level remains unexplored.

Beliefs: conclusion

I think most rationalists (Bayesians, LessWrong people) "oscillate" between the 2nd level and the 1st level, even though they have some 3rd level tools.

Eliezer Yudkowsky "oscillates"[3] between the 1st level and the 3rd level: he likes level 1 ideas (e.g. "map is not the territory"), but has a bunch of level 3 ideas ("some maps are the territory") about

  • math: math is a system that exists "out there", but can (hypothetically) be changed;
  • ethics: ethics is a system that exists "out there", but can be modified;
  • decision theory: "decision logic" is a system that exists out there, but can be changed;
  • Security Mindset: on one hand "safety" of a system is a subjective property, but it's also an objective property that exists "out there".

2. Ontology and reality

I think there are three levels of exploring the relationship between ontologies and reality:

  1. You think that an ontology describes the essence of reality.
  2. You study how different ontologies describe different aspects of reality.
  3. You study types of changes (D): how ontologies change/get changed by some other thing Y. D and Y need to be important even outside of the topic of (pure) ontology.

Y can be "human minds" or simply "objects". D can be "matching/not matching" or "creating a structure" (two very basic, but generally important processes). The first choice gives you Kant's "Copernican revolution": reality needs to match your basic ontology, otherwise information won't reach your mind (there are different types of "matching", and transcendental idealism defines one of the most complicated ones). The second gives you Ontic Structural Realism: ontology is not about things, it's about structures created by things.

On what level are you? Have you studied ontologies/epistemologies without judgement? What are the most interesting ontologies/epistemologies you can think of?

3. Philosophy overall

I think there are three levels of doing philosophy in general:

  1. You try to directly prove an idea in philosophy using specific philosophical tools.
  2. You study types of philosophical ideas.
  3. You study types of changes (D): how philosophical ideas change/get changed by some other thing Y. D and Y need to be important even outside of (pure) philosophy.

To give a bunch of examples, Y can be:

I think people did a lot of 3rd level philosophy, but we haven't fully committed to the 3rd level yet. We are used to treating philosophy as a closed system, even when we make significant steps outside of that paradigm.

Exploring ethics

1. Commitment to values

I think there are three levels of values:

  1. Real values. You treat your values as particular objects in reality.
  2. Subjective values. You care only about things inside of your mind. For example, do you feel good or not?
  3. Semantic values. You care about types of changes (D): how your values change/get changed by reality (Y). Your value can be expressed as a combination of the three components: "a real thing + its meaning + changes".

Example of a semantic value: you care about your friendship with someone. You will try to preserve the friendship, but in a limited way: you accept that one day the relationship may end naturally (your value may "die" a natural death). Semantic values are temporal and path-dependent. Semantic values are like games embedded in reality: you want to win the game without breaking the rules.

2. Ethics

I think there are three levels of analyzing ethics:

  1. You analyze norms of specific communities and desires of specific people. That's quite easy: you are just learning facts.
  2. You analyze types of norms and desires. You are lost in contradictory implications, interpretations and generalizations of people's values. You have a meta-ethical paralysis.
  3. You study types of changes (D): how norms and desires change/get changed by some other thing Y. D and Y need to be important even outside of (purely) ethical context.

Ethics: tasks and games

For example, Y can be "tasks, games, activities" and D can be "breaking/creating symmetries". You can study how norms and desires affect properties of particular activities.

Let's imagine an Artificial Intelligence or a genie who fulfills our requests (it's a "game" between us). We can analyze how bad actions of the genie can break important symmetries of the game. Let's say we asked it to make us a cup of coffee:

  • If it killed us after making the coffee, we can't continue the game. And we ended up with less than we had before. And we wouldn't have made the request if we had known that was going to happen. And the game can't be "reversed": the players are dead.
  • If it has taken us under mind control, we can't affect the game anymore (and it gained 100% control over the game). If it placed us into a delusion, then the state of the game can be arbitrarily affected (by dissolving the illusion), and comes to depend on perspective.
  • If it made us addicted to coffee, we can't stop or change the game anymore. And the AI/genie drastically changed the nature of the game without our consent. It changed how the "coffee game" relates to all other games, skewed the "hierarchy of games".

Those are all "symmetry breaks". And such symmetry breaks are bad in most tasks.

Ethics: Categorical Imperative

With Categorical Imperative, Kant explored a different choice of Y and D. Now Y is "roles of people", "society" and "concepts"; D is "universalization" and "becoming incoherent/coherent" and other things. Examples of Kant's analysis.

Ethics: Preferences

If Y is "preferences" and D is "averaging", we get Preference utilitarianism. (Preferences are important even outside of ethics and "averaging" is important everywhere.) But this idea is too "low-level" to use in analysis of ethics.

However, if Y is "versions of an abstract preference" and D is "splitting a preference into versions" and "averaging", then we get a high-level analog of preference utilitarianism. For example, you can take an abstract value such as Bodily autonomy and try to analyze the entirety of human ethics as an average of versions (specifications) of this abstract value.

Preference utilitarianism reduces ethics to an average of micro-values, the idea above reduces ethics to an average of a macro-value.
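
The contrast between the two reductions can be sketched in a few lines of toy code (the scores, the people, and the named "versions" of the value are all invented for illustration):

```python
# Micro: average many independent preferences (preference utilitarianism).
# Macro: average many specifications (versions) of one abstract value.

def average(scores):
    return sum(scores) / len(scores)

# Micro-values: three people's direct preferences about some action.
micro_scores = [0.2, 0.9, 0.4]
micro_verdict = average(micro_scores)

# Macro-value: one abstract value ("bodily autonomy"), split (D) into
# versions (Y), each scoring the same action.
macro_versions = {
    "autonomy as consent": 0.8,
    "autonomy as non-interference": 0.6,
    "autonomy as self-determination": 0.7,
}
macro_verdict = average(list(macro_versions.values()))
```

The arithmetic is the same in both cases; the difference is entirely in what gets averaged.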

Ethics: conclusion

So, what's the point of the 3rd level of analyzing ethics? The point is to find objective sub-structures in ethics where you can apply deduction to exclude the most "obviously awful" and "maximally controversial and irreversible" actions. The point is to "derive" ethics from much broader topics, such as "meaningful games" and "meaningful tasks" and "coherence of concepts".

I think:

  • Moral philosophers and Alignment researchers are ignoring the 3rd level. People are severely underestimating how much they know about ethics.
  • Acknowledging the 3rd level doesn't immediately solve Alignment, but it can "solve" ethics or the discourse around ethics. Empirically: just study properties of tasks and games and concepts!
  • Eliezer Yudkowsky has a limited 3rd level understanding of meta-ethics ("Abstracted Idealized Dynamics", "Morality as Fixed Computation", "The Bedrock of Fairness"), but misses that he could make his idea broader.
  • Particularism (in ethics and reasoning in general) could lead to the 3rd level understanding of ethics.

Exploring perception

1. Properties

There are three levels of looking at properties of objects:

  1. Inherent properties. You treat objects as having more or less inherent properties. E.g. "this person is inherently smart"
  2. Meta-properties. You treat any property as universal. E.g. "anyone is smart under some definition of smartness"
  3. Semantic properties. You treat properties only as relatively attached to objects. You focus on types of changes (D): how properties and their interpretations change/get changed by some other thing Y. You "reduce" properties to D and Y. E.g. "anyone can be a genius or a fool under certain important conditions" or "everyone is smart, but in a unique and important way"

2. Commitment to experiences and knowledge

I think there are three levels of commitment to experiences:

  1. You're interested in particular experiences.
  2. You want to explore all possible experiences.
  3. You're interested in types of changes (D): how your experience changes/gets changed by some other thing Y. D and Y need to be important even outside of experience.

So, on the 3rd level you care about interesting ways (D) in which experiences correspond to reality (Y).

3. Experience and morality

I think there are three levels of investigating the connection between experience and morality:

  1. You study how experience causes us to do good or bad things.
  2. You study all the different experiences "goodness" and "badness" causes in us.
  3. You study types of changes (D): how your experience changes/gets changed by some other thing Y. D and Y need to be important even outside of experience. But related to morality anyway.

For example, Y can be "[basic] properties of concepts" and D can be "matches / mismatches [between concepts and actions towards them]". You can study how experience affects properties of concepts which in turn bias actions. An example of such analysis: "loving a sentient being feels fundamentally different from eating a sandwich. food taste is something short and intense, but love can be eternal and calm. this difference helps to not treat other sentient beings as something disposable"

I think the existence of the 3rd level isn't acknowledged much. Most versions of moral sentimentalism are 2nd level at best. Epistemic Sentimentalism can be 3rd level in the best case.

Exploring cognition

1. Patterns

I think there are three levels of [studying] patterns:

  1. You study particular patterns (X). You treat patterns as objective configurations in reality.
  2. You study all possible patterns. You treat patterns as subjective qualities of information, because most patterns are fake.
  3. You study types of changes (D): how patterns change/get changed by some other thing Y. D and Y need to be important even outside of (explicit) pattern analysis. You treat a pattern as a combination of the three components: "X + Y + D".

For example, Y can be "pieces of information" or "contexts": you can study how patterns get discarded or redefined (D) when new information gets revealed/new contexts get considered.

You can study patterns which are "objective", but exist only in a limited context. For example, think about your friend's bright personality (personality = a pattern). It's an "objective" pattern, and yet it exists only in a limited context: the pattern would dissolve if you compared your friend to all possible people. Or if you saw your friend in all possible situations they could end up in. Your friend's personality has some basis in reality (X), has a limited domain of existence (Y) and the potential for change (D).
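
Here's a toy illustration of a pattern that is "objective" in a narrow context but dissolves in a wider one (the data is invented; "brightness" scores are just random numbers):

```python
import random

random.seed(0)  # deterministic toy data

# X: "brightness" scores. Friend #0 stands out in a small friend group...
friends = [0.95, 0.3, 0.4, 0.2]
# ...but Y, the context, can be widened to "all possible people".
population = [random.random() for _ in range(10_000)]

def stands_out(score, others):
    """The pattern 'this person is exceptionally bright', relative to a context."""
    return all(score > other for other in others)

in_friend_group = stands_out(friends[0], friends[1:])  # pattern holds
in_population = stands_out(friends[0], population)     # pattern dissolves (D)
```

The pattern isn't "fake" in the small context; it just has a limited domain of existence.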

2. Patterns and causality

I think there are three levels in the relationship between patterns and causality. I'm going to give examples about visual patterns:

  1. You learn which patterns are impossible due to local causal processes. For example: "I'm unlikely to see a big tower made of eggs standing on top of each other". It's just not a stable situation due to very familiar laws of physics.
  2. You learn statistical patterns (correlations) which can have almost nothing to do with causality. For example: "people like to wear grey shirts".
  3. You learn types of changes (D): how patterns change/get changed by some other thing Y. D and Y need to be important even outside of (explicit) pattern analysis. And related to causality.

Y can be "basic properties of images" and "basic properties of patterns"; D can be "sharing properties" and "keeping the complexity the same". In simpler words:

On the 3rd level you learn patterns which have strong connections to other patterns and basic properties of images. You could say such patterns are created/prevented by "global" causal processes. For example: "I'm unlikely to see a place fully filled with dogs. dogs are not people or birds or insects, they don't create such crowds or hordes". This is very abstract, connects to other patterns and basic properties of images.

Causality: implications for Machine Learning

I think...

  • It's likely that Machine Learning models don't learn 3rd level patterns as well as they could, as sharply as they could.
  • Machine Learning models should be 100% able to learn 3rd level patterns. It shouldn't require any specific data.
  • Learning/comparing level 3 patterns is interesting enough on its own. It could be its own area of research. But we don't apply statistics/Machine Learning to try to mine those patterns. This may be a missed opportunity for humans.

3. Cognitive processes

Suppose you want to study different cognitive processes, skills, types of knowledge. There are three levels:

  1. You study particular cognitive processes.
  2. You study types (qualities) of cognitive processes. And types of types (classifications).
  3. You study types of changes (D): how cognitive processes change/get changed by some other thing Y. D and Y need to be important even without the context of cognitive processes.

For example, Y can be "fundamental[4] configurations / fundamental objects" and D can be "finding a fundamental configuration/object in a given domain". You can "reduce" different cognitive processes to those Y and D: (names of the processes below shouldn't be taken 100% literally)

  • Causal reasoning learns fundamental configurations of fundamental objects in the real world. So you can learn stuff like "this abstract rule applies to most objects in the world".
  • Symbolic reasoning learns fundamental configurations of fundamental objects in your "concept space". So you can learn stuff like ""concept A containing concept B" is an important pattern" (see set relations).
  • Correlational reasoning learns specific configurations of specific objects.
  • Mathematical reasoning learns specific configurations of fundamental objects. So you can build arbitrary structures with abstract building blocks.
  • Self-aware reasoning can transform fundamental objects into specific objects. So you can think thoughts like, for example, "maybe I'm just a random person with random opinions" (you can consider your perspective as non-fundamental) or "maybe the reality is not what it seems" (you can consider your basic knowledge about reality as non-fundamental).

I know, this looks "funny", but I think all of this could be formalized easily enough. Isn't that a natural way to study types of reasoning? Just ask what knowledge a certain type of reasoning learns!

Exploring theories

2/3 of this part is just an overview of popular mathematical and physical ideas. The purpose of this part is to train you to look at things in terms of Y and D.

1. Science

I think there are three ways of doing science:

  1. You predict a specific phenomenon.
  2. You study types of phenomena. (qualities of phenomena)
  3. You study types of changes (D): how the phenomenon changes/gets changed by some other thing Y. D and Y need to be important even outside of this phenomenon.

Imagine you want to explain combustion (why/how things burn):

  1. You try to predict combustion. This doesn't work, because you already know "everything" about burning and there are many possible theories. You end up making things up because there's not enough new data.
  2. You try to compare combustion to other phenomena. You end up fantasizing about imaginary qualities of the phenomenon. At this level you get something like theories of "classical elements" (fantasies about superficial similarities).
  3. You find or postulate a new thing (Y) which affects/gets affected (D) by combustion. Y and D need to be important in many other phenomena. If Y is "types of matter" and D is "releasing / absorbing", this gives you Phlogiston theory. If Y is "any matter" and D is "conservation of mass" and "any transformations of matter", you get Lavoisier's theory. If Y is "small pieces of matter (atoms)" and D is "atoms hitting each other", you get Kinetic theory of gases.

So, I think phlogiston theory was a step in the right direction, but it failed because the choice of Y and D wasn't abstract enough.

I think most significant scientific breakthroughs require level 3 ideas. Partially "by definition": if a breakthrough is not "level 3", then it means it's contained in a (very) specific part of reality.

2. Math

I think there are three ways of doing math:

  1. You explore specific mathematical structures.
  2. You explore types of mathematical structures. And types of types. And typologies. At this level you may get something like Category theory.
  3. You study types of changes (D): how equations change/get changed by some other thing Y. D and Y need to be important even outside of (explicit) math.

Mathematico-philosophical insights

Let's look at math through the lens of the 3rd level:

All the concepts above are "3rd level". But we can classify them, creating a new set of three levels of exploration (yes, this is recursion!). Let's do this. I think there are three levels of mathematico-philosophical concepts:

  1. Concepts that change the properties of things we count. (e.g. topology, fractals, graph theory)
  2. Concepts that change the meaning of counting. (e.g. probability, computation, utility, sets, group theory, Gödel's incompleteness theorems and Tarski's undefinability theorem)
  3. Concepts that change the essence of counting. (e.g. Calculus, vectors, probability, actual infinity, fractal dimensions)

So, Calculus is really "the king of kings" and "the insight of insights". 3rd level of the 3rd level.

3. Physico-philosophical insights

I would classify physico-philosophical concepts as follows:

  1. Concepts that change the way movement affects itself. E.g. Net force, Wave mechanics, Huygens–Fresnel principle
  2. Concepts that change the "meaning" of movement. E.g. the idea of reference frames (principles of relativity), curved spacetime (General Relativity), the idea of "physical fields" (classical electromagnetism), conservation laws and symmetries, predictability of physical systems.
  3. Concepts that change the "essence" of movement, the way movement relates to basic logical categories. E.g. properties of physical laws and theories (Complementarity; AdS/CFT correspondence), the beginning/existence of movement (cosmogony, "why is there something rather than nothing?", Mathematical universe hypothesis), the relationship between movement and infinity (Supertasks) and computation/complexity, the way "possibility" spreads/gets created (Quantum mechanics, Anthropic principle), the way "relativity" gets created (Mach's principle), the absolute mismatch between perception and the true nature of reality (General Relativity, Quantum Mechanics), the nature of qualia and consciousness (Hard problem of consciousness), the possibility of Theory of everything and the question "how far can you take [ontological] reductionism?", the nature of causality and determinism, the existence of space and time and matter and their most basic properties, interpretation of physical theories (interpretations of quantum mechanics).

Exploring meta ideas

To define "meta ideas" we need to think about many pairs of "Y, D" simultaneously. This is the most speculative part of the post. Remember, you can treat those speculations simply as sci-fi ideas.

Each pair of abstract concepts (Y, D) defines a "language"[5] for describing reality. And there's a meta-language which connects all those languages. Or rather there's many meta-languages. Each meta-language can be described by a pair of abstract concepts too (Y, D).

I think the idea of "meta-languages" can be used to analyze:

  • Consciousness. You can say that consciousness is "made of" multiple abstract interacting languages. On one hand it's just a trivial description of consciousness, on the other hand it might have deeper implications.
  • Qualia. You can say that qualia is "made of" multiple abstract interacting languages. On one hand this is a trivial idea ("qualia is the sum of your associations"), on the other hand this formulation adds important specific details.
  • The ontology of reality. You can argue that our ways to describe reality ("physical things" vs. purely mathematical concepts, subjective experience vs. physical world, high-level patterns vs. complete philosophical reductionism, physical theory vs. philosophical ontology) all conflict with each other and lead to paradoxes when taken to the extreme, but can't exist without each other. Maybe they are all intertwined?
  • Meta-ethics. You can argue that concepts like "goodness" and "justice" can't be reduced to any single type of definition. So, you can try to reduce them to a synthesis of many abstract languages. See G. E. Moore ideas about indefinability: the naturalistic fallacy, the open-question argument.

According to the framework, ideas about "meta-languages" define the limit of conceivable ideas.

If you think about it, it's actually a quite trivial statement: "meta-models" (consisting of many normal models) are the limit of conceivable models. Your entire conscious mind is such a "meta-model". If no model works for describing something, then a "meta-model" is your last resort. On one hand "meta-models" are a very trivial idea[6], on the other hand nobody has ever cared to explore the full potential of the idea.

Nature of percepts

I talked about qualia in general. Now I just want to throw out my idea about the nature of particular percepts.

There are theories and concepts which link percepts to "possible actions" and "intentions": see Affordance. I like such ideas, because I like to think about types of actions.

So I have a variation of this idea: I think that any percept gets created by an abstract dynamic (Y, D) or many abstract dynamics. Any (important) percept corresponds to a unique dynamic. I think abstract dynamics bind concepts. But this is an "unfinished" idea.

A criticism of Bayesian reasoning?

I haven't got much here, but I thought I'd share. You can use my framework to come up with a criticism of Bayesianism. It doesn't have to be a criticism, though.

You can classify types of reasoning as follows:

  1. You update your beliefs only after receiving very impactful/significant new information (X).
  2. You update your beliefs on every single bit of any new information (X).
  3. You update your beliefs as much as new information (X) changes (D) some other thing Y.

So, you update only on a particular type of information (1st level), or on any type of information (2nd level), or on a changing type of information (3rd level).
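
The three update styles can be caricatured in a few lines of code. This is a toy sketch, not a serious epistemology: the additive updates and the choice of a smoothed summary statistic as the "other thing Y" are my own stand-ins.

```python
def update_level_1(belief, evidence, threshold=0.5):
    """1st level: update only on impactful/significant information."""
    return belief + evidence if abs(evidence) >= threshold else belief

def update_level_2(belief, evidence):
    """2nd level: update on every single bit of new information."""
    return belief + evidence

def update_level_3(belief, evidence, summary, rate=0.1):
    """3rd level: update only as much as the evidence changes
    some other thing Y (here: a smoothed summary of past evidence)."""
    new_summary = (1 - rate) * summary + rate * evidence  # Y
    delta = new_summary - summary                         # D: how much Y changed
    return belief + delta, new_summary
```

On the 3rd level, evidence that matches the existing summary barely moves Y, so it barely moves the belief; a new sunrise that fits what you already expect changes almost nothing.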

According to the classification, Bayesianism is only 2nd level and there should exist some other type of reasoning. Does it actually exist and does it matter? It's up to your intuitions.

I think the 2nd type of reasoning is infeasible for humans, and there are some subtle hints that humans don't think like this: see the Raven paradox, the Sunrise problem, the reference class problem and Pascal's Mugging (which may require a non-Bayesian update to solve). The first two "paradoxes" have a Bayesian solution, but they can make you question inductive reasoning anyway.

For David Deutsch, Y is "memes", "good explanations" and "invariants": new information is meaningful only to the extent that it leads to new hard-to-vary memes ("good explanations"). So, in a way, observing a new sunrise gives absolutely zero new information about the sun, because we already have a good explanation of how the sun works. You can read this review/summary of "The Beginning of Infinity" to get more familiar with David Deutsch's ideas.
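For reference, the standard Bayesian answer to the Sunrise problem is Laplace's rule of succession: with a uniform prior over the unknown success rate, after s successes in n trials the probability of another success is (s + 1) / (n + 2). A minimal sketch (my own illustration, not from the post):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: P(next success) = (s + 1) / (n + 2),
    assuming a uniform prior over the unknown success probability."""
    return Fraction(successes + 1, trials + 2)

# Each observed sunrise still shifts the probability, just less and less:
print(rule_of_succession(10, 10))        # 11/12
print(rule_of_succession(10000, 10000))  # 10001/10002
```

This highlights the contrast with Deutsch's stance: the Bayesian number keeps creeping toward 1 with every sunrise, whereas "good explanation" reasoning assigns the millionth sunrise no evidential weight at all.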


This part of the post discusses some questions about my framework. I mostly answer "from inside the framework" and give my own opinion (which may seem unjustified to you).

Q: Isn't the choice of possible "Y" and "D" too large?


I think in practice it doesn't matter:

  • Every 3rd level idea is important and valuable.
  • In the current culture, 3rd level ideas are rare.
  • 3rd level ideas are very original and general (they always cover multiple domains of knowledge, by design). So, the number of 3rd level ideas is limited.

Q: Why should we explore?

Q: Why should I explore ideas if I think I already discovered the best idea? Do you think that exploration is an unconditional virtue? A:

I just think that:

  • You can't guess the correct idea without exploring other ideas.
  • Any topic has at least 2 important ideas in it.
  • The most important and probable ideas are on the 3rd level.
  • If you have a good understanding of a topic and a good idea in it, then you should "automatically" understand many other ideas in the topic.

So, I think that "level of exploration" can be an important measure of intelligence/progress in a given topic even if the best idea is known beforehand.

  • All that said... yes, I think intellectual exploration is a freaking unconditional virtue! C'mon, how can you not value originality? This value is also crucial for communication between people or between your mind and reality. How can I convince you of something if you're not even interested in listening to me? How can you discover something if you're not interested in looking? What's the joy of reality if our egos matter more than exploration?

Q: What about modeling and predicting?

Q: It's extremely important for intelligence to create models and specific predictions of reality. This seems completely missing from your idea. What's the deal? A:

At every step of the thinking process you have a choice: "what should I think about next?" The post describes how to make the correct choice. The post doesn't mention specific models with specific predictions because I think good models and good predictions follow from making the correct choices. I think it's an unexpected/counterintuitive/interesting property of my idea that it doesn't explicitly discuss predicting reality.

I think "three levels of exploration" can be turned into a probabilistic inference rule. But it's a very exotic one.

Q: Do you overestimate the importance of the 3rd level?


To answer this I'll need to introduce another way to describe the "three levels of exploration". It goes like this:

  1. Object-level reasoning.
  2. Meta-level reasoning. (see Object level vs. Meta level)
  3. "Dynamical" reasoning: it studies how object level and meta level can bleed into one another.

First, I think that all types of reasoning are important.

Second, I think that "dynamical" reasoning is underestimated. Partially because it can be confused with pure meta-level or pure object-level reasoning.

Third, I think that we need to "try" overestimating the 3rd level. As a thought experiment, in order to explore the 3rd level we need to imagine a world where it is the most important one.

So I think "yes", it's likely that I'm overestimating the 3rd level, especially in this post.


Thank you for reading this.

If you want to discuss the idea, please focus on the idea itself and its particular applications. Or on exploring particular topics!

I want to thank JustisMills for feedback and a couple of specific ideas.

  1. ^

    I don't agree too much with Scott Alexander's specific framing, by the way.

  2. ^

    What do I mean by "nobody had done it"? I'm wearing my "confidence hat" when saying that. Setting the confidence hat aside, my reasoning goes somewhat as follows:

    1) Rationalists are rare people who worry about not having a (general) theory of reasoning. Yet they don't study informal argumentation much. Scott Alexander seems more interested in informal arguments and intentions behind them (e.g. Conflict Vs. Mistake, Noncentral Fallacy, Weak Men), but he doesn't try to build any theory of informal argumentation.

    2) Progressive left are rare people who try to deconstruct almost any concept (Social constructionism). Yet they are not too focused on deconstructing argumentation itself.

    3) Philosophers are rare people who question everything... but they are not very focused on questioning "philosophical debates" in general.

    So, people who think about fundamental stuff related to argumentation are a priori rare, but people who focus on argumentation itself (in a fundamental way) are even rarer. If they exist, they are pretty removed from any popular discourse.

  3. ^

    This doesn't have to be a criticism. Even though I think that the community as a whole should explore the 3rd level more.

  4. ^

    "fundamental" here means "VERY widespread in a certain domain"

  5. ^

    Instead of "language" I could use the word "model". But I wanted to highlight that those "models" don't have to be formal in any way.

  6. ^

     For example, we have a "meta-model" of physics: a combination of two 'wrong' theories, General Relativity and Quantum Mechanics.

Comments

I think playing around with ideas like this in detail is underrated. Noting that I'm only criticizing the strong version because I like it overall. Both 'study' and your named variables here are serving as aether variables. If you have a flexible enough representation then you can use it to represent anything, unfortunately you've also gutted it of predictive power (vs post hoc explanation).

Secondly, and more constructively: I'm reminded of Donella Meadows' Leverage Points.

If you have a flexible enough representation then you can use it to represent anything, unfortunately you've also gutted it of predictive power (vs post hoc explanation).

I think this can be wrong:

  1. "Y" and "D" are not empty symbols, they come with an objective enough metric (the metric of "general importance"). So, it's like saying that "A" and "B" in the Bayes' theorem are empty symbols without predictive power. And I believe the analogy with Bayes' theorem is not accidental, by the way, because I think you could turn my idea into a probabilistic inference rule.
  2. If my method can't help to predict good ideas, it still can have predictive power if it evaluates good ideas correctly (before they get universally recognized as good). Not every important idea is immediately recognized as important.

Can you expand on the connection with Leverage Points? Seems like 12 Leverage Points is an extremely specific and complicated idea (doesn't mean it can't be good in its own field, though).

I see the 12 points as possible trailheads for analyzing D when the person is new to the type of analysis and needs examples to chain off of.