We aim to build AI’s underlying logic from scratch, based on a new cognitive theory.
First, we must confront the fundamental flaws of existing AI. Current AI, dominated by deep learning and built upon cognitive theories such as ACT-R and predictive processing (PP), suffers from a foundational flaw: the objects these systems manipulate are not cognitively primitive. LLMs tokenize words, ACT-R manipulates event chains, PP operates on predictions; all of these are high-level derivatives, not bedrock constituents. How can we expect to build an AGI that understands the nature of things from a foundation of prefabricated blocks?
More precisely, they are “molecular-level” theories: they describe interactions between macroscopic functional blocks. They work well when assembling known structures, but inevitably fail when recombination or extension is required. What we need is an “atomic-level” theory: one that explains how functional molecules (e.g., objective entities) are built from more basic cognitive units, how molecules can be decomposed, and how entirely new molecules can be constructed from atoms, and that thereby predicts the properties of novel structures.
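The compose/decompose requirement above can be pictured with a purely illustrative sketch. Every name here (`Atom`, `Molecule`, `decompose`) is hypothetical and is not part of Weight-Calculatism itself, whose actual units are defined in the linked repository; the sketch only shows what it means for arbitrary "molecules" to be built from, and flattened back into, primitive "atoms":

```python
from dataclasses import dataclass

# Illustrative only: an "atom" stands in for a hypothetical primitive
# cognitive unit; a "molecule" is any structure composed of atoms or
# of other molecules.
@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Molecule:
    parts: tuple  # each part is an Atom or a nested Molecule

def decompose(unit):
    """Flatten a molecule back into its constituent atoms, in order."""
    if isinstance(unit, Atom):
        return [unit]
    atoms = []
    for part in unit.parts:
        atoms.extend(decompose(part))
    return atoms

# Build a novel "molecule" from atoms and recover its constituents.
a, b = Atom("a"), Atom("b")
m = Molecule((a, Molecule((b, a))))
print([x.name for x in decompose(m)])  # ['a', 'b', 'a']
```

The point of the sketch is only the shape of the claim: an atomic-level theory must license both directions, composition of new structures and decomposition into primitives, rather than treating its functional blocks as opaque.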
Starting from first principles, we have constructed the **Weight‑Calculatism** cognitive theory: returning to the most essential and intuitive phenomena and re‑examining how we think, with the aim of uncovering the common processes and substrate underlying all cognition. At this point, we appear to have arrived at a theory of remarkable simplicity and explanatory power. Like all new theories, it currently lacks substantial empirical support, but we believe it serves as a useful heuristic framework, providing subsequent theories with a reference point and a target for criticism. The full theory is documented on GitHub: [https://github.com/Ergodicist/Weight-Calculatism-cognitive-theory]
Because of the language gap, I had to translate my theory with AI assistance, so the phrasing may read a little strangely; the originality and novelty of the theory are entirely my own. As this is my first article, I apologize that the limits of AI translation prevent me from delivering the whole discussion here, which I regret leaves this article incomplete. I will release the rest as soon as I can. In the meantime, you can visit [https://doi.org/10.48550/arXiv.2512.03072] for further discussion of implementation details.