We aim to build AI’s underlying logic from scratch, based on a new cognitive theory.
To do so, we first consolidate the fundamental flaws of existing AI. Current AI, dominated by deep learning and built upon cognitive theories like ACT-R and PP (predictive processing), suffers from a foundational flaw: the objects it manipulates are not cognitively primitive. LLMs tokenize words, ACT-R manipulates event chains, and PP works on predictions; these are high-level derivatives, not bedrock constituents. How can we expect to build an AGI that understands the nature of things from a foundation of prefabricated blocks?
More precisely, they are **molecular‑level theories**: they describe interactions between macroscopic functional blocks. They work well when assembling known structures, but inevitably fail when recombination or extension is required. What we need is an **atomic‑level theory**: one that explains how functional molecules (e.g., objective entities) are built from more basic cognitive units, how molecules can be decomposed, and how entirely new molecules can be constructed from atoms, and that can thereby predict the properties of novel structures.
Starting from first principles, we have constructed the **Weight‑Calculatism** cognitive theory—returning to the most essential and intuitive phenomena, re‑examining how we think, aiming to uncover the common processes and substrate underlying all cognition. At this point, we appear to have arrived at a theory of remarkable simplicity and explanatory power. Like all new theories, it currently lacks substantial empirical support, but we believe it serves as an excellent heuristic framework, providing a reference and target for criticism for subsequent theories. Here we focus on its application to AI architecture; the full theory is documented on GitHub. Please visit https://github.com/Ergodicist/Weight-Calculatism-cognitive-theory
Now we can begin the discussion of the Weight‑Calculative AI architecture.
In this article, we focus on the implementation principles of the Weight-Calculative AI architecture, grounded in the Weight-Calculatism cognitive theory, and explain how it resolves these issues. For further discussion of implementation details, please visit https://doi.org/10.48550/arXiv.2512.03072. Weight-Calculatism comprises three interlocking components: Logical Atoms, Logical Operations, and the Weight-Calculation Engine.
3.1 LOGICAL ATOMS: THE SUBSTRATE OF COGNITION
We posit that intelligence must be built upon stable, interpretable primitives. In Weight-Calculatism theory, Logical Atoms are the fundamental units of cognition. Human cognition ultimately grounds out in intuitive and emotional experiences; Logical Atoms are precisely these conscious experiences and their combinations. Objective entities are not fundamental enough: they are collections of properties, not the most primitive elements of cognition. Here, knowledge-based memories constitute the long-term information repository, while episodic memories, i.e., memories of event sequences, are essentially the ordered storage of several Logical Atoms, which can be names, actions, and so on. This constitutes a response to the two types of memory in ACT-R.
For humans, a Logical Atom is a piece of information closely tied to a conscious experience, generated through information input, classification, and storage. Furthermore, information isn't only input from the outside; processing existing information through computation can also generate new information and new concepts, stored as new Logical Atoms. This is the basis for actively expanding and deepening cognition.
Primary, primitive Logical Atoms are generally indivisible. Through processing and integration, primary atoms can evolve to become complex and high-level. The information from a series of Logical Atoms can participate in computation as a whole, which can be termed a higher-level atom. Based on their level of abstraction, they can be roughly divided into three categories:
The most fundamental and concrete are conscious experiences, which originate from perception.
The patterns and regularities of certain conscious experiences—i.e., the common results of computations performed on perceptions—are the properties of things. This is a more abstract category. For example, "universality" is an abstract pattern induced after experiencing phenomena like "Events A, B, and C share the same preconditions, and their outcomes are also the same": it is itself the result of computation on conscious experiences.
Objective entities are collections of properties. This is paradoxically the most abstract and advanced category. For cognition, an objective entity is merely a collection of properties, including concepts crucial to constituting an "object," such as conservation of matter or continuity of motion.
These three categories of basic logical objects originate from perception, computational induction, and integration (broader computation and relation), respectively.
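The three-level hierarchy above can be sketched as a data structure. This is a minimal, hypothetical illustration, not the paper's implementation; the class names, example atoms ("warmth", "hot", "fire"), and the idea of storing constituents as a list are all assumptions made for clarity.

```python
from dataclasses import dataclass, field
from enum import Enum

class AtomLevel(Enum):
    EXPERIENCE = 1   # conscious experiences, grounded in perception
    PROPERTY = 2     # patterns induced by computation over experiences
    ENTITY = 3       # objective entities: collections of properties

@dataclass
class LogicalAtom:
    name: str
    level: AtomLevel
    # higher-level atoms are built from lower-level constituents
    constituents: list["LogicalAtom"] = field(default_factory=list)

# build the three levels bottom-up, mirroring the text:
# experiences -> induced properties -> an entity as a property bundle
warmth = LogicalAtom("warmth", AtomLevel.EXPERIENCE)
flicker = LogicalAtom("flickering light", AtomLevel.EXPERIENCE)
hot = LogicalAtom("hot", AtomLevel.PROPERTY, [warmth])
luminous = LogicalAtom("luminous", AtomLevel.PROPERTY, [flicker])
fire = LogicalAtom("fire", AtomLevel.ENTITY, [hot, luminous])
```

The point of the sketch is only that an "entity" atom carries no content of its own; it is exhaustively a bundle of property atoms, which in turn bottom out in experience atoms.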
For instance, "position" in our cognition represents the experience, within visual and related perceptions, of an object's form remaining constant while its surrounding environment changes. "Space" represents the totality of positions where objects might exist. "Time" represents the similarities and differences felt between repetitive events (like the alternation of day and night), directly related to the accumulation of memory, and thus possessing continuity. To enable AI understanding, we must construct this entire set of related sensations and patterns in a computationally tractable way, rather than merely representing nouns and their statistical regularities.
Compared to the macroscopic "chunks" in ACT-R, the objects in Weight-Calculatism are more fundamental and concise. To build truly powerful AI, its stored knowledge base cannot remain at the level of nouns like space and time but must deeply construct the relevant patterns and fundamental modes of interaction to achieve genuine understanding.
3.2 LOGICAL OPERATIONS: OPERATING ON THE COGNITIVE SUBSTRATE
This section concerns how Logical Atoms interact. Weight-Calculatism theory posits that all logical relations decompose into two fundamental operations: Pointing and Comparison.
Pointing is the activation of one Logical Atom by another. When two Atoms (pieces of information) are strongly related, this operation exists between them. Which Atoms can undergo Pointing needs to be learned and stored beforehand. It embodies relationships: If A, then B. If A and B are events, it can be a "leads to" or "caused by" relationship; if A and B are objects or properties, it can be a "correlates with" relationship, meaning A has property B or property A belongs to B. The "association" in deep learning is implicit, statistical correlation within network weights. In contrast, "Pointing" is an explicit, symbolic link that can itself be inspected and reasoned about.
Comparison involves comparing two streams of information and outputting a "Same" or "Different" result. Identifying crucial differences requires numerous comparison operations on various details. The difference between existence and non-existence (of some qualities) is the foundation of perception and definition; the similarity between two objects is the foundation of induction and analogy. An AI can compare various characteristics of things but doesn't inherently know what these characteristics ultimately signify or what inferences can be drawn.
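The Comparison operation as described reduces to a feature-by-feature same/different judgment. A minimal sketch, with the feature names and dictionary representation being illustrative assumptions:

```python
def compare(stream_a, stream_b):
    """Elementary Comparison: for each feature present in either
    information stream, output "Same" or "Different"."""
    return {
        key: "Same" if stream_a.get(key) == stream_b.get(key) else "Different"
        for key in set(stream_a) | set(stream_b)
    }

# comparing two perceived objects feature by feature
result = compare({"color": "red", "shape": "round"},
                 {"color": "green", "shape": "round"})
```

Absence of a feature in one stream yields "Different", which matches the text's point that the difference between existence and non-existence of a quality is itself a comparison result.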
The objects participating in operations are not only single Logical Atoms but can also be combinations of several atoms, such as "AND" and "OR". For computers, this is implemented by basic logic gates, which need not be discussed in depth here.
The cognitive process is realized by an asynchronous activation propagation algorithm. When a Logical Atom is activated, it triggers all its connected Pointing operations in parallel, propagating activation signals to downstream atoms. Simultaneously, a central working memory collects highly activated atoms and invokes Comparison operations to evaluate them. The entire process lacks a central controller for serial scheduling; the dynamics of cognition emerge from this concurrent, activation-based propagation network.
For example, when the Logical Atom for "smoke" is activated, it deterministically Points to its cause ("fire") and its properties ("toxic"), thereby activating the related atoms. It also Points to its gaseous property and subsequently to the laws governing its next changes ("rising" and "diffusing"). This is not merely the statistical regularity that "smoke is strongly correlated with fire" but the execution of a logical operation. It embodies causality, correlation, and property association in a directly interpretable manner, forming the bedrock of causal reasoning and associative memory.
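The propagation dynamics above can be sketched in a few lines. Note this serializes with a queue what the text describes as parallel and asynchronous, and the link strengths and threshold are invented illustrative values, not parameters from the theory:

```python
from collections import deque

# Pointing links: source atom -> list of (target atom, link strength).
# The relations follow the "smoke" example; strengths are assumptions.
POINTS_TO = {
    "smoke": [("fire", 0.9), ("toxic", 0.7), ("gas", 0.8)],
    "gas":   [("rising", 0.8), ("diffusing", 0.8)],
}

def propagate(seed, threshold=0.5):
    """Spread activation along Pointing links; return the atoms whose
    activation clears the working-memory threshold."""
    activation = {seed: 1.0}
    queue = deque([seed])
    while queue:
        atom = queue.popleft()
        for target, strength in POINTS_TO.get(atom, []):
            a = activation[atom] * strength  # activation decays per hop
            if a > activation.get(target, 0.0):
                activation[target] = a
                queue.append(target)
    # working memory collects only highly activated atoms
    return {a: v for a, v in activation.items() if v >= threshold}

working_memory = propagate("smoke")
```

Activating "smoke" pulls "fire" and "toxic" directly into working memory, and "rising"/"diffusing" indirectly via the "gas" property, mirroring the two-hop chain in the example.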
By feeding the final output information to another processing module that converts it into language or action, a complete chain of learning, thinking, and feedback is formed. The processing and expressive capabilities of the Weight-Calculative AI architecture are more complete, enabling complex symbolic reasoning with a more concise structure. Using deterministic logical relations as the core algorithm is more accurate, efficient, and generalizable compared to vector decomposition and probability calculations.
3.3 WEIGHT-CALCULATION: THE DECISION-MAKING MODEL
Logical Atoms and Operations constitute the "thinking" part of the system, while the Weight-Calculation Engine is responsible for "decision-making" and "action." It is an interpretable, human-like decision-making model that unifies rational calculation and emotional drive. The core formula is:
Weight = Benefit × Probability
In the Weight-Calculatism cognitive theory, the actual process is: the brain assigns weights to objects (events, emotions) based on information and on fundamental requirements dictated by genes (survival instinct), and then performs calculations. Here, 'Weight' is the result of the computation, representing the priority the brain assigns to an object; it is the significance we assign, not merely its objective value. 'Benefit = Gain - Loss'. Information is processed and computed by the brain to derive the perceived probability of an event occurring. The comparison of weights corresponding to different potential actions is reflected in consciousness, ultimately determining an individual's thoughts and final behavioral tendencies.
The Weight-Calculation formula describes the human decision-making process, and its objects should be events and (felt) emotions, not physical objects, concepts, or properties. For instance, "money" itself cannot be plugged into this formula, but "obtaining money" can. This event Points to "being able to acquire desired things," which in turn Points to "satisfying material desires"—an initial weight determined by genetic instinct. The specific weight of "obtaining money" is related to its relevance (e.g., the amount of money).
The true power of the Weight-Calculation formula lies in the fact that Benefit can always be decomposed along the causal chain of events into other weights, ultimately tracing back to Initial Weights. Therefore, the weight value of any event must be expressible as:
Weight = Σ (Initial Weightᵢ × Relevanceᵢ)
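The decomposition can be sketched as a recursion along the Pointing chain, using the "obtaining money" example. The chain structure follows the text, but every numeric value (the initial weight, the relevances, the probability) is an illustrative assumption:

```python
# Initial Weights: terminal values fixed by instinct (value assumed)
INITIAL_WEIGHTS = {"satisfying material desires": 10.0}

# event -> list of (downstream event, relevance), per the money example
CHAIN = {
    "obtaining money": [("acquiring desired things", 0.8)],
    "acquiring desired things": [("satisfying material desires", 0.9)],
}

def weight(event, probability=1.0):
    """Weight = Benefit x Probability, with Benefit decomposed along
    the Pointing chain until an Initial Weight is reached."""
    if event in INITIAL_WEIGHTS:
        return INITIAL_WEIGHTS[event] * probability
    benefit = sum(weight(nxt) * relevance for nxt, relevance in CHAIN[event])
    return benefit * probability

# weight of "obtaining money" at a perceived 50% chance of success:
# 10.0 * 0.9 * 0.8 * 0.5 = 3.6
w = weight("obtaining money", probability=0.5)
```

Unrolling the recursion gives exactly the summed form Weight = Σ (Initial Weightᵢ × Relevanceᵢ), with the relevances multiplied along each path back to an instinctive terminal value.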
Weight-Calculatism quantitatively incorporates emotion into the weight calculation, positing that emotion itself is generated when the calculation process matches specific patterns. For example:
Acquiring a liked object produces a positive weight, leading to pleasure.
If the actual value exceeds the predicted value, a dopamine-like signal is released. It doesn't directly cause pleasure but enhances motivational strength, driving the agent to act for "acquisition" and replicate this unexpected gain.
If an event, left to its natural course, is likely to cause harm to the self, fear is felt, driving the agent to pay attention and make decisions to change the situation.
If this is caused by another agent, anger might also be felt, driving the agent to retaliate to achieve deterrence and prevent future attacks.
The generation of emotion (sensibility) isn't solely about the magnitude of the weight; it involves multiple abstract evaluation systems.
For humans, Initial Weights are determined by genes and instinct. To use the Weight-Calculatism cognitive architecture for building value-aligned decision-making AI, we need only construct a complete, reasonable library of Initial Weights and simulate the patterns that generate emotions. By modifying the Initial Weights, we can easily alter its "personality" and behavioral traits.
The Weight-Calculatism architecture provides a clear, non-anthropomorphic implementation path for affective computing, transforming it from mysterious "qualia" into a designable algorithmic module.
The architecture naturally explains reinforcement learning mechanisms: when the actual benefit (B_actual) resulting from an action significantly exceeds its expected benefit (B_expected), the system triggers a reinforcement signal analogous to dopamine. This signal does not directly produce pleasure but permanently strengthens the weight or relevance of the "Pointing" chain that led to the successful outcome, making the Weight (W) of that decision path higher in similar future situations. This implements learning from experience and explains the origin of intrinsic motivations like "the pursuit of surprise."
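The reinforcement rule described above amounts to a prediction-error update on a chain link's relevance. A minimal sketch; the link representation, learning rate, and all numbers are assumptions, and the update is a simple delta rule standing in for whatever schedule the full architecture would use:

```python
# relevance of each Pointing-chain link before learning (assumed value)
relevance = {("action A", "good outcome"): 0.4}

LEARNING_RATE = 0.1

def reinforce(link, benefit_actual, benefit_expected):
    """When B_actual exceeds B_expected, a dopamine-like signal
    strengthens the link that produced the outcome; otherwise the
    link is left unchanged."""
    surprise = benefit_actual - benefit_expected
    if surprise > 0:
        relevance[link] += LEARNING_RATE * surprise
    return relevance[link]

# an outcome worth 5.0 against an expectation of 2.0 strengthens the link
r = reinforce(("action A", "good outcome"),
              benefit_actual=5.0, benefit_expected=2.0)
```

Because Weight = Σ (Initial Weightᵢ × Relevanceᵢ), raising the link's relevance directly raises the Weight of that decision path in similar future situations, which is the learning effect the text describes.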
Due to space constraints, only the theoretical principles and fundamental perspective breakthroughs of the Weight-Calculative AI architecture are presented here. A sufficiently sound cognitive theory should suffice to illustrate the point. Feel free to share your perspective, whether positive or negative.