No LLM generated, assisted/co-written, or edited work.
Conceptual idea, requires verification
Translated from Russian into English. The translation may be inaccurate and contain errors.
While explaining some phenomenon, I wondered why I build my thinking this way. My intuition told me I could keep explaining endlessly, so I started digging. The HBME hypothesis says the chain cannot continue indefinitely, and I think this is a structural property.
Any "why" chain ends in one of three finals.
Axiom - "it just is"; the explanation does not continue within the system.
Transition to a metasystem - entering another field of knowledge, which resets the limit, and the chain repeats.
Dissolution - "we don't know", or a tautology.
Long chains (conceptual k > 8) are a sequence of structural restarts, not depth within a single system. Writing this down formally, we get the following chain of explanations:
L0 = initial fact
L(n+1) = Explain(L(n))
Each transition introduces a new concept or field of knowledge. The explanatory value decreases as:
V(n) = V0 · r^n, with r < 1
Agent depth limit:
k(A) ≤ log(C_A / K_min)
where C_A is the information capacity of the agent and K_min is the minimum algorithmic complexity of the system's basic rules. Agents with higher capacity reach a larger k, but every chain still closes. I distinguish two levels of analysis:
Structural level - a new level is counted only on a transition to a new field of knowledge (k ≈ 0-4).
Conceptual level - a new level is counted whenever a new explaining concept appears (k ≈ 5-11).
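The two formulas above can be sketched numerically. This is a minimal illustration, not a measurement: the capacity and complexity numbers below are invented for the example, and the decay rate r = 0.7 is an arbitrary choice.

```python
import math

def depth_limit(capacity, k_min):
    """Upper bound on explanation depth: k(A) <= log(C_A / K_min)."""
    return math.log(capacity / k_min)

def value(n, v0=1.0, r=0.7):
    """Explanatory value at depth n: V(n) = V0 * r**n, with r < 1."""
    return v0 * r ** n

# Hypothetical agents (capacities in arbitrary units, purely illustrative):
for name, capacity in [("small agent", 1e9), ("large agent", 1e12)]:
    k = depth_limit(capacity, k_min=1e6)
    print(f"{name}: k <= {k:.1f}, value at that depth ~ {value(int(k)):.3f}")
```

The sketch only shows the shape of the claim: a larger C_A raises the bound on k logarithmically, while the value of each further level shrinks geometrically, so depth is doubly limited.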
For clarity, here is an example with an apple. L0 - the apple falls. L1 - gravity. L2 - spacetime curvature. L3 - Einstein's field equations. L4 - differential geometry. L5 - axioms of geometry. This chain had 4 conceptual levels and 1 structural level: the structural transition was from physics to mathematics, and the chain was closed by an axiom.
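The apple chain can be encoded to make the two counting criteria concrete. A minimal sketch; the counting convention used here (the closing axiom ends the chain rather than adding a conceptual level) is my reading of the example above, not a formal definition from the hypothesis.

```python
# Each step of the chain: (explanation, field of knowledge).
chain = [
    ("the apple falls", "physics"),             # L0
    ("gravity", "physics"),                     # L1
    ("spacetime curvature", "physics"),         # L2
    ("Einstein's field equations", "physics"),  # L3
    ("differential geometry", "mathematics"),   # L4
    ("axioms of geometry", "mathematics"),      # L5, closes the chain
]

# Conceptual levels: new explaining concepts after L0, minus the closing axiom.
conceptual_k = len(chain) - 2

# Structural levels: transitions into a new field of knowledge.
structural_k = sum(a != b for (_, a), (_, b) in zip(chain, chain[1:]))

print(conceptual_k, structural_k)  # 4 1
```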
The domain determines the probability distribution of the finals without determining the specific outcome.
Here are example domains and the finals they tend to meet. Mathematics: axioms dominate; transitions to another system are rare, as is dissolution. Physics: axioms are common, as are transitions to mathematics; dissolution remains possible. Ethics: ending in an axiom, transitioning to biology/sociology, and dissolving are roughly equally likely. The stability of a domain is a measure of its axiomatic isolation: mathematics knows where it ends, while ethics has no complete picture of its "end". From this I can state two hypotheses. H1, the structural limit: for any chain of explanations there is a limit k.
This is supported by the solid, nearly formalized fields of knowledge, where the chain must stop. H2: many natural-science chains reduce to physical laws as the final bottom. This statement is not universal: mathematics closes in axioms, and ethics passes into the social sciences or values. Thinking about this, I found interesting connections with fundamental results:
Gödel - a system cannot close itself from the inside. The structure is similar to HBME.
Turing - the halting problem: a system included in what it explains cannot fully explain itself.
Miller - 7±2. The conceptual level coincides with this classical limit (k ≈ 5-11).
Wolfram - the universe as a short program. K_min is finite, so k is automatically finite. Simon and Kahneman - bounded rationality: we intersect on cognitive limitations but diverge in subject matter. Nagel - "What Is It Like to Be a Bat?", the need for an external perspective to understand consciousness. As a consequence, regarding the hard problem of consciousness I can state the following: it is not solvable from within human knowledge, for the same reason that any chain closes. Nagel intuitively felt the need for an external perspective; HBME explains the structural reason for it and formulates a condition under which it becomes possible: an external agent with sufficient C_A to describe the consciousness of the subject.
Methodology of the experiments. Agents: a human, Claude, Grok, Grok expert, ChatGPT, DeepSeek. Counting criteria: conceptual (a new explaining concept), structural (a transition to a new field of knowledge). Conditions: HBME was never mentioned, so as not to elicit template answers; all models were tested on free plans. Results, conceptual criterion: human k ≈ 4-5; Claude k = 5, stably; Grok k ≈ 5-7; Grok expert k ≈ 7-9; ChatGPT k ≈ 6-7; DeepSeek k ≈ 9-11; Grok four-agent consensus k ≈ 6-8. Range 5-11, centered at 7-8, coinciding with Miller's number. Results, structural criterion: Grok average 0.7-1.7; Grok expert average 2.7; ChatGPT (standard) average 2.7; ChatGPT (research) average 1.0. Range 0-4.
Real texts, scored by the structural criterion:
D'Alembert (philosophy) k = 4, transition. Darwin (biology) k = 3, transition. OpenStax (physics) k = 2, axiom. Dawkins (biology) k = 3, transition. Hempel (philosophy of science) k = 3, transition. Everything falls within the predicted range. An important caveat: the AI checks require a reservation, since the models were trained on human texts and may reproduce human patterns of explanation rather than an actual structural limit of reality. Distinguishing the two requires additional checks. Now, falsifiable predictions:
Mathematics will end in axioms more often than other domains, under any agent.
Ethics will end in dissolution more often than physics or mathematics.
k at the structural level will correlate with the number of disciplinary transitions in the chain; long conceptual-k chains can be decomposed into one or several structural restarts. Now I will discuss possible objections, those that arose in my head and those that may arise in yours.
Infinite regress: it is theoretically possible in pure logic, but HBME applies to practical explanation by finite agents, and infinite chains remain an abstraction within it. k as an artifact of AI training: AI can reproduce human patterns, but the structural limits also show up in texts by real authors, so k is not only an artifact. The controversy of H2: not all chains converge to physics; this is a probabilistic observation rather than a strict law, and HBME survives without universal convergence. Some vulnerabilities that I see:
The level-counting criterion may vary from agent to agent and requires standardization. H2 is controversial: not all domains converge to physics. There is no empirical data on people; the only human subject in the experiment was myself. AI agents are an assumption, not a proof of a structural limit.
I will close the post with questions not yet answered within HBME. How can K_min and C_A be calculated accurately for real agents? Can dissolution be formalized in a mathematical model of explanation? How does HBME work for collective knowledge and distributed systems? Can the limits of conceptual k in ethics be tested empirically on people?

A small addition to the text. I want to share some observations that fit HBME. I noticed one interesting thing: over the time humanity has existed, the practical benefit of discoveries has steadily fallen. The discovery of fire, say, was powerful and of great practical benefit, but it did not probe the structure of the world and answered no questions. Now the trend is that the practical benefit of discoveries keeps shrinking, but in return we understand the structure of the world better. It is like a meta-chain, where each new level lowers the practical benefit but answers more and more questions while giving birth to new ones. Even this hypothesis is another meta-level: not much practical use, but it may answer some questions.