Rejected for the following reason(s):
- No LLM generated, assisted/co-written, or edited work.
- LessWrong has a particularly high bar for content from new users and this contribution doesn't quite meet the bar.
This text has been translated from Russian into English; there may be mistakes.
Hypothesis, requires verification
Introduction
At some point in life, I became very interested in chains of explanations. I decided to run a written experiment, and I was surprised to notice that sooner or later I got stuck at level 4-5. Not because I was overloaded; on the contrary, I could have continued, but only by moving from one system of explanation to another. For example, when explaining an apple purely through its properties (answering the question “why do I eat an apple?”, built on the process “I eat an apple”), I stopped at level 4-5 again and again. I could have gone further, but only by leaving the apple’s properties for entirely different systems of explanation, such as biology or evolution. That seemed not quite right to me, because I wanted to go deeper into the properties of the apple, not into something else. One important clarification: when I ran this experiment, I had a specific question in mind, namely whether it is possible to understand exactly how such chains end, why they end the way they do, and whether this can be measured.
To formalize the observations, I introduced the concept of an explanation step through the operation Explain_A(). In addition, k counts the transitions to new levels.
I will give an explanation so that it is entirely clear what I mean.
Explain_A(L) is an explanation step performed by agent A (a person, a program, a researcher). Its input is a statement or fact, which from this point on will be called L, and its output is the next level of explanation, L_next.
To make it clearer why I introduce Explain: it fixes each level of explanation and lets me observe the chain of these levels; it ensures at least minimal reproducibility, so that different researchers can apply the same rules; and it makes it possible to measure the depth of chains and to classify the finals of explanation.
Now for the steps themselves: what counts as a correct explanation step and what does not.
Correct steps:
A step is counted if three conditions are met:
1. Q answers the question “why P” and does not merely describe P in another way.
2. Q introduces a new explanatory element that was not in P.
3. Q explains the same phenomenon as P and does not switch to another question.
What is not considered a step:
1. Rephrasing - the same thing said in different words.
2. A parallel theory - another explanation of the same phenomenon, not the next level.
3. A change of question - an explanation of a different phenomenon under the same pretext.
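The three conditions can be summarized as a single predicate. This is a hedged sketch of my own: the judgments themselves (does Q answer “why P”? is the element new? is it the same phenomenon?) still have to be made by the annotator; the code only fixes the rule that all three must hold at once:

```python
def is_valid_step(answers_why: bool,
                  introduces_new_element: bool,
                  same_phenomenon: bool) -> bool:
    """A step Q after P is counted only if all three conditions hold:
    1. Q answers "why P" rather than merely describing P;
    2. Q introduces an explanatory element absent from P;
    3. Q explains the same phenomenon as P (no change of question)."""
    return answers_why and introduces_new_element and same_phenomenon


# Rephrasing: nothing new is introduced, so it is not a step.
print(is_valid_step(True, False, True))   # False
# Change of question: a different phenomenon, so it is not a step.
print(is_valid_step(True, True, False))   # False
# A genuine step satisfies all three conditions.
print(is_valid_step(True, True, True))    # True
```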
At this stage I want to point out that levels also differ in kind: there are conceptual levels and structural levels, and it is important to distinguish them. I will now give you the information needed to do this successfully and to count them correctly.
Protocol for the conceptual level - a new step is counted if a new term or principle appears that did not exist at the previous level and which performs an explanatory function.
Protocol for the structural level - a structural transition is counted if at least one of two conditions is met.
Both conditions are checked strictly relative to the previous area of explanation - not relative to the agent’s knowledge as a whole.
1. First condition - the new area introduces basic concepts that are not defined through the concepts of the previous area. Check: is it possible to give a full definition of the new concept using only the terms of the previous area? If not - the transition is counted.
2. Second condition - the new area has laws that are not derived from the laws of the previous area. Check: does the new law logically follow from the laws of the previous area? If not - the transition is counted.
What is not counted as a structural transition: deepening within the same area through a more detailed mechanism, where all new concepts and laws are defined through the concepts and laws of the previous level, strictly relative to it.
Let me make an analogy for clarity: conceptual levels are the floors inside a house; structural levels are the houses themselves.
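The two counting protocols can be sketched as a small decision function. Again, this is my own illustration; the yes/no inputs are the annotator’s judgments, made strictly relative to the previous area, and a step can be counted conceptually, structurally, or both:

```python
def count_transition(new_explanatory_term: bool,
                     definable_in_prev_terms: bool,
                     derivable_from_prev_laws: bool) -> set[str]:
    """Apply both counting protocols to one candidate step.
    Structural: counted if the new area's basic concepts cannot be fully
    defined in the previous area's terms, OR its laws do not follow from
    the previous area's laws (a new "house").
    Conceptual: counted if a new explanatory term or principle appears
    (a new floor in the same house)."""
    counted = set()
    if new_explanatory_term:
        counted.add("conceptual")
    if not definable_in_prev_terms or not derivable_from_prev_laws:
        counted.add("structural")
    return counted


# Biology -> chemistry: chemical reactions are not defined through
# biological terms, so the step counts both ways.
print(sorted(count_transition(True, False, False)))
# A more detailed mechanism inside the same area: conceptual only.
print(sorted(count_transition(True, True, True)))
```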
Comparing k between different agents is possible provided Explain_A() is documented, that is, the rules by which the steps were counted are explicitly described. For HBME this is important for reproducibility.
Now let us move on to the essence of the hypothesis.
Any “why/what for” chain ends at a finite level. The final is described by four independent statuses.
1. Logical status - exclusive. It should be checked first.
Logical dead end - there is a circle or repetition: Q explains P through P. If so, the final is fixed and further analysis is not needed.
2. Structural status - is the statement an axiom of the system?
Axiom - a statement accepted within the system without proof, as a foundation. Check: is it accepted as a basis of the system?
Non-axiom - the statement is not a basis of the system.
3. Epistemic status - is there an explanation?
Epistemic dead end - an explanation is absent but theoretically possible. Check: is there a fundamental structural prohibition on explanation? If not, it is an epistemic dead end.
4. Agent status - is the agent limited?
Pragmatic dead end - the agent has reached the limit of its CA (informational capacity). An explanation is possible for a more powerful agent. Check: could an agent with greater CA continue the chain?
It is also worth noting that finals can be single (only one final) or combined (two or more finals). Treating the statuses of the final this way is more correct and gives a much broader picture than if there could only ever be one final. Even when there are several finals, I am now inclined to assume that one of them is the “main” one, but this assumption is intuition, which, as practice shows, can be very deceptive.
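The checking order can be written out as a small routine (my own sketch; the four yes/no inputs are still the annotator’s judgments). The logical status is exclusive and checked first; the remaining statuses are independent, which is exactly what allows combined finals:

```python
def classify_final(circular: bool, is_axiom: bool,
                   explanation_absent_but_possible: bool,
                   agent_at_limit: bool) -> list[str]:
    """Assign final statuses in the order fixed by the hypothesis."""
    # 1. Logical status is exclusive: a circle (Q explains P through P)
    #    fixes the final and stops further analysis.
    if circular:
        return ["logical dead end"]
    finals = []
    # 2. Structural status: is the statement accepted as a basis?
    if is_axiom:
        finals.append("axiom")
    # 3. Epistemic status: no explanation, but no structural prohibition.
    if explanation_absent_but_possible:
        finals.append("epistemic dead end")
    # 4. Agent status: the limit of this agent's CA has been reached.
    if agent_at_limit:
        finals.append("pragmatic dead end")
    return finals


# A combined final: an axiom that researchers still try to explain.
print(classify_final(False, True, True, False))
# A circle overrides everything else.
print(classify_final(True, True, True, True))
```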
There is also serious reason to add that transitions into a metasystem add a level to the structural k. This phenomenon is itself just a restart of the chain, after which a further structural k, or the final statuses described above, may appear.
At this stage it is important to give an example so that you better understand what I am talking about. The questions in the example will not be written out, but each new level is formed from the question “why?” applied to the previous level. I think this is already obvious, but it seemed important to clarify.
Example - disease
L0 - person got sick. Biology.
L1 - because bacteria entered the body. Biology, new element: external pathogen. Counted conceptually. Structural transition: no - the concept of pathogen is defined through biological terms.
L2 - because the bacteria release toxins. Biology, new element: pathogen impact mechanism. Counted conceptually. Structural transition: no.
L3 - because toxins disrupt cell mechanisms. Biology, new element: cellular level damage. Counted conceptually. Structural transition: no.
L4 - because these are chemical reactions at the molecular level. Chemistry - the concept of chemical reaction is not defined through biological terms, chemistry laws are not derived from biological laws. Counted conceptually and structurally +1.
L5 - because these are the laws of chemistry. Chemistry, new element: generalizing principle. Counted conceptually. Structural transition: no - staying in chemistry.
L6 - because these are quantum interactions. Quantum physics - the uncertainty principle is not derived from the laws of chemistry, wave functions are not defined through chemical terms. Counted conceptually and structurally +1.
L7 - because quantum mechanics describes particle behavior at this level through a separate mathematical apparatus. Quantum physics - the mathematical apparatus of quantum mechanics is not derived from classical physics. Counted structurally +1.
L8 - nature of quantum effects. Final - Axiom + Epistemic dead end. Quantum laws are used as the basis of physical theory, but physicists are simultaneously looking for a deeper explanation.
Conceptual k = 6. Structural k = 3.
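The counting for this example can be reproduced mechanically. Below I simply transcribe the annotations above into flags and sum them (the labels are mine):

```python
# (level, counted conceptually, counted structurally), as annotated above.
disease_chain = [
    ("L0 person got sick",                       False, False),
    ("L1 bacteria entered the body",             True,  False),
    ("L2 the bacteria release toxins",           True,  False),
    ("L3 toxins disrupt cell mechanisms",        True,  False),
    ("L4 chemical reactions at molecular level", True,  True),
    ("L5 these are the laws of chemistry",       True,  False),
    ("L6 these are quantum interactions",        True,  True),
    ("L7 separate mathematical apparatus of QM", False, True),
]

conceptual_k = sum(conceptual for _, conceptual, _ in disease_chain)
structural_k = sum(structural for _, _, structural in disease_chain)
print(f"conceptual k = {conceptual_k}, structural k = {structural_k}")
# conceptual k = 6, structural k = 3
```

Note that L7 shows why structural k is not just a count of area changes: the chain stays in quantum physics, but the step is counted structurally by the second condition (laws not derivable from the previous area).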
The example is quite illustrative, and you can conduct similar ones independently and calculate the structural and conceptual levels yourself. I will also venture a forecast: you will not get beyond k=6 at the structural level or k=15 at the conceptual level. These figures follow from my experiments on a small empirical base, so your numbers may differ significantly from mine, but I would be glad to see what your indicators turn out to be.
I also wanted to record my observations formally.
1. Chain of explanations:
L₀ = initial fact
Lₙ₊₁ = Explain_A(Lₙ)
Each transition introduces a new explanatory element or moves to a new area of knowledge.
2. Decline of practical value:
V(n) = V₀ · rⁿ, r < 1
Where V(n) is the practical value of the explanation at level n, V₀ the value of the first explanation, and r a decay coefficient less than one. The formula is heuristic: it describes the observed pattern.
3. Agent depth limit:
k(A) ≤ log(CA / Kmin)
Where CA is the informational capacity of the agent and Kmin the minimum algorithmic complexity of one explanatory step. The formula is heuristic: exact measurement of CA remains an open question. A pragmatic dead end occurs when n → k(A). For a more powerful agent, the same point may be not a dead end but an intermediate step.
Constraints of the quantitative CA model: until the exact dependence k(A) = f(CA, Kmin) is known, any quantitative estimates are a hypothesis for future research. Reasoning about critical CA necessary for proving specific theories remains speculative and is not part of strict proof of the hypothesis.
4. Probability of a dead-end type final at the agent’s limit:
P_dead_end(n) → 1 as n → k(A)
This applies only to systems where the agent is part of the explained phenomenon: consciousness, language, formal systems.
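For completeness, here are both heuristic formulas in code form. The numbers fed in are purely illustrative, since, as stated above, CA and Kmin cannot yet be measured:

```python
import math


def practical_value(v0: float, r: float, n: int) -> float:
    """Heuristic decay of practical value: V(n) = V0 * r**n, with r < 1."""
    assert 0.0 < r < 1.0, "the decay coefficient must be below one"
    return v0 * r ** n


def depth_limit(ca: float, k_min: float) -> float:
    """Heuristic agent depth bound: k(A) <= log(CA / Kmin)."""
    return math.log(ca / k_min)


# Illustrative values only: a chain whose first explanation is worth 100
# loses half its practical value per level.
print(practical_value(100.0, 0.5, 3))   # 12.5
# An agent whose capacity exceeds the per-step complexity a thousandfold.
print(round(depth_limit(1e6, 1e3), 2))  # 6.91
```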
Now I want to talk about domains and finals and share some of my observations, which are based on several texts; more about the texts later.
Domains and finals
A domain defines the probability distribution over finals; it does not determine a specific outcome.
1. Mathematics - structural k of 0-3; the axiom dominates. A closed formal system: it knows where it ends.
2. Physics - structural k of 1-4; the axiom and the transition to mathematics are most frequent, and an epistemic dead end is possible.
3. Ethics - structural k of 1-6; all finals are equally likely. An open normative system: it does not know where it ends.
I want to add that a domain’s stability in its finals is a measure of its axiomatic closure.
Now, it seems to me, it is time to talk about the two statements I have made.
H1 - structural limit: for any chain of explanations there exists a finite k. At the structural level this is almost formal: the number of disciplinary clusters is finite, so the chain must stop. At the conceptual level it is empirical: the agent hits the limit of its world model.
H2 - convergence: many natural-science chains show a tendency toward reduction to more fundamental theories. This is a probabilistic observation; I do not claim a strict law. Mathematics closes in the ZFC axioms; ethics goes into the social sciences or values.
While writing up the hypothesis, a reasonable question arose, based on the fact that much of what I wrote had been formulated before me. The essence of the question is: “wasn’t this already done?” I wanted to answer it in advance, so as to avoid misunderstandings that could arise during reading, though I may still have missed some works.
1. Kant fixed the limits of knowledge - what we can know. HBME asks how deep we can explain and how exactly the explanation ends. Different questions.
2. Gödel proved that inside a formal system there are undecidable statements. This is an inspiring analogy for HBME - not a proof. Different mechanisms, similar nature.
3. Turing showed that a system cannot fully check itself from within. Another inspiring analogy - especially for cases of self-reference. Not a proof, but structurally close.
4. Simon and Kahneman described the limits of decision-making. HBME describes the limits of explanation. They intersect in cognitive constraints, but diverge in subject.
Meta-consequences of HBME are partly analogous to the limitations of formal systems in Gödel’s theorems and the halting problem: there are structures that the agent cannot fully explain from within. Unlike these strict mathematical results, HBME remains an epistemic model applicable to explanatory chains in any knowledge domain.
No one before has classified explanation finals operationally, with verifiable criteria. No one has introduced two levels of depth measurement. No one has linked the depth of explanation to the informational capacity of the agent through a formula.
I am also familiar with the Hard Problem of consciousness.
I became interested in understanding it through the prism of HBME, and here is the consequence I would like to state.
The hard problem of consciousness: why do physical processes in the brain generate subjective experience? Why is pain painful and not just a signal in neurons?
The chain of explanations of consciousness closes on a pragmatic dead end that transitions into a logical one: the system explaining consciousness is itself part of what it explains. It cannot step outside itself.
Nagel in 1974 sensed that an external perspective is needed. HBME explains the structural reason and formulates the condition of a solution: an external agent with sufficient CA to describe consciousness as an object from the outside. This is a consequence of the hypothesis, not a proof.
HBME emphasizes that the limits of explanation depend not only on the structure of the phenomenon but also on the cognitive and informational capabilities of the agent. Thus some explanations may be fundamentally unreachable for human-level CA, although formally possible for a more powerful agent. I will note again that this is a possible consequence within HBME and a very interesting thought to consider, but not a proof.
While considering the hypothesis and examining options for proving it, I came across an interesting thought: HBME applies to itself.
Here is the thought.
The attempt to fully prove HBME itself is an explanatory chain. Since each chain is subject to the depth limit k(A) of the agent, the proof may encounter a pragmatic dead end for a given agent. This does not confirm the hypothesis but is compatible with its prediction of the existence of explanation limits.
HBME describes the structure of explanatory chains but does not claim to explain the origin of that structure. An attempt to fully explain HBME can form a meta-chain, which is also subject to HBME’s limitations. I note again that self-application is not proof, but it is an observation interesting enough to show you.
As for the texts I mentioned, now is the time to describe them properly. They give a very small empirical base and make the pattern visible; still, it is better than nothing, but, I will note again, it cannot serve as proof.
All texts were checked against the structural criterion without knowledge of the hypothesis. The annotation was carried out by the researcher; this is a limitation requiring independent verification.
D’Alembert, Encyclopédie, philosophy - k=4, Transition → Epistemic dead end.
Darwin, Origin of Species, biology - k=3, Transition → Epistemic dead end.
OpenStax, physics textbook - k=2, Axiom.
Dawkins, The Blind Watchmaker, biology - k=3, Transition → Epistemic dead end.
Hempel, DN model, philosophy of science - k=3, Transition → Epistemic dead end.
All five texts fell in the range 2-4. These are exploratory data which, in my opinion, are sufficient to reveal the pattern, but the volume of empirical data is small and far from proof.
Speaking further about domains, some falsifiable predictions suggest themselves:
1. Mathematics will give an axiom more often than other domains for any agent.
2. Ethics will give a greater spread of finals than physics or mathematics.
3. Structural k will correlate with the number of disciplinary transitions in the chain.
4. Physics will give a transition to mathematics as a dominant final.
5. Long conceptual chains (k > 8) will be decomposable into several structural restarts.
Also, I have not yet taken the time to explain what the formula about decreasing value means. Let me correct that.
In my observation, it is characteristic of many people to stop at conceptual level k = 1. Take the apple example again: the process “I eat an apple” is already understood, and if a person asks “why do I eat an apple?”, he will stop at the answer to that question, because any further information would be redundant and practically useless. This fits the principle of cognitive economy.
But I would not be me if I did not want to look at this process more broadly, so I turned to the history of humanity, or rather its discoveries. Early human discoveries, such as fire or the wheel, were very important and had, and still have, very extensive practical applications. Today’s discoveries have an interesting property: their practical application is considerably lower, but their depth astonishes the mind. From this I can distinguish two parameters that every discovery has simultaneously: practicality and depth, where practicality is the breadth of actual application, and depth is the answers to questions about the structure of the phenomenon. Take fire: when humans tamed it, no one knew exactly how it worked, but its practical application was extensive while the depth of knowledge about the phenomenon was negligible. The wheel, likewise: once it appeared, it was simply the fact that it “spins”; its practical application was large-scale, but again the depth was negligible. Nobody then had any idea how fire works or by what principles a wheel spins at all. From this I can formulate one brief statement: practical application often outpaces the explanatory chain. To close this part of the hypothesis: all human discoveries can be described as a chain of explanations, where initially there was a phenomenon, then conceptual and structural levels of k began to be applied to it, and each new level clarified the phenomenon while carrying less and less practical benefit.
I must stress that this part about humanity is purely observation and intuition, and I would be glad if someone investigated it and found out at what conceptual k and structural k humanity currently stands in different fields, and the average value of k for the conceptual and structural levels separately. It would also be interesting to know what practical application our current levels have and how deep they are. Again, I do not claim that my intuition and observations are proof. To clarify: practicality is an informal analogue of V(n), and depth is k. I make this clarification so that this passage is not a separate philosophical thought but still relates to HBME.
It is time to summarize and to talk about the questions and vulnerabilities that arose for me, and possibly for you.
Vulnerabilities:
1. Explain_A() varies between agents - comparison of k requires documentation of criteria.
2. H2 is controversial - not all domains converge to more fundamental theories.
3. Empirical data on humans are absent - five texts are not statistics.
4. CA is not measured quantitatively - the k(A) formula is heuristic.
5. Text annotation was carried out without blind control - possible confirmation bias.
Open questions
1. How exactly to calculate CA and Kmin for a real agent?
2. Is it possible to formalize the distinction between epistemic and pragmatic dead ends strictly?
3. How does HBME work for collective knowledge - science as an institution, not a single agent?
4. Is the k limit fixed for an agent or does it change depending on the domain?
5. Is it possible to empirically check the distribution of finals on a large sample of texts?
6. Is there a minimal cognitive capacity CA necessary to prove specific theories - and how to measure it?