This system mathematically models the effects of hormones and neurotransmitters in the human brain and integrates them into the decision-making mechanisms of large language models (LLMs). You can think of it as “emotional loads” that form step by step in the artificial neural networks of LLMs. The goal is to enable the model to mimic emotional thinking and even certain instinctive behaviors.
In the initial prototype, when a user provides input (during inference), the model tracks neuron activations at each layer. It records each neuron's output as a probability value and determines the associative relationships between these values. Although this process is highly complex in the human brain, it is easy to track in an LLM: in practice, the system traces back from a generated token and lists the other high-probability candidates at each layer.
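The tracking step above can be sketched in plain Python. This is a toy illustration, not the project's implementation: the activation values and the `top_candidates` helper are hypothetical, and a real system would hook into the model's hidden layers.

```python
import math

def softmax(xs):
    """Convert a layer's raw activations into probability values."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def top_candidates(layer_activations, k=3):
    """For each layer, record every neuron's output as a probability
    and list the top-k high-probability entries as (index, probability)."""
    trace = []
    for acts in layer_activations:
        probs = softmax(acts)
        ranked = sorted(enumerate(probs), key=lambda ip: ip[1], reverse=True)
        trace.append(ranked[:k])
    return trace

# Toy activations for two layers of four "neurons" each
trace = top_candidates([[0.1, 2.0, 0.5, 1.5], [1.0, 1.0, 3.0, 0.2]], k=2)
```

In a real deployment, the per-layer activations would come from the model's forward pass (e.g. via framework hooks); here they are hard-coded to keep the sketch self-contained.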
These relationships are then processed by a special “emotion analysis algorithm” (Limbic module). Emotional loads increasing or decreasing across layers are calculated. These values are mapped to specific neurotransmitter and hormone profiles (e.g., dopamine, cortisol, serotonin). The system analyzes words and context based on the dominant meaning of a word and its positive/negative values (VAD). In this way, the added “synthetic neuromodulation layer” is activated.
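A minimal sketch of the VAD-to-profile mapping described above, assuming VAD scores in [-1, 1]. The mapping rules and coefficients here are placeholders for illustration, not the project's actual formulas:

```python
def vad_to_profile(valence, arousal, dominance):
    """Map a VAD (valence-arousal-dominance) reading, each in [-1, 1],
    to an illustrative neurotransmitter/hormone load profile in [0, 1].
    These linear rules are placeholder assumptions."""
    clamp = lambda x: max(0.0, min(1.0, x))
    return {
        "dopamine":  clamp((valence + arousal) / 2),    # reward/drive: positive, aroused states
        "serotonin": clamp((valence + dominance) / 2),  # calm confidence: positive, dominant states
        "cortisol":  clamp((arousal - valence) / 2),    # stress: aroused, negative states
    }

profile = vad_to_profile(valence=0.8, arousal=0.6, dominance=0.4)
```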
The purpose of this layer is to measure the emotional load at each step of the model’s decision-making process. At the output stage, the overall emotional state is calculated using the average of these loads and layer weights. Then, the words and sentences generated by the model are shaped according to these emotional loads. For example, certain words are emphasized, or the context is enriched, and a percentage of the previous emotional values is incorporated into the calculations of the next prompt.
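The averaging-plus-carry-over step can be written as a short function. The layer weights and the carry fraction below are illustrative assumptions:

```python
def overall_state(layer_loads, layer_weights, previous_state=None, carry=0.3):
    """Weighted average of per-layer emotional loads, blended with a
    fraction (`carry`) of the previous prompt's emotional state.
    Weights and carry fraction are illustrative, not calibrated values."""
    total_w = sum(layer_weights)
    avg = {}
    for key in layer_loads[0]:
        avg[key] = sum(l[key] * w for l, w in zip(layer_loads, layer_weights)) / total_w
    if previous_state:
        for key in avg:
            avg[key] = (1 - carry) * avg[key] + carry * previous_state.get(key, 0.0)
    return avg

state = overall_state(
    layer_loads=[{"dopamine": 0.8}, {"dopamine": 0.4}],
    layer_weights=[1.0, 3.0],          # later layers weighted more heavily
    previous_state={"dopamine": 1.0},  # residue from the previous prompt
    carry=0.2,
)
```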
In the output layer, for instance, when a high “motivation” load (Dopamine 50%, Norepinephrine 20%, Serotonin 15%, Testosterone 10%, Oxytocin 5%) or “stress” load is applied, the model’s thought chain expands or contracts, and the decision context changes accordingly. These hormone and neurotransmitter values should not be compared directly with humans; they operate entirely according to the model’s own logic.
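One way such a profile could steer generation is by modulating decoding parameters. The "motivation" percentages below come from the text; the mapping to temperature and chain length, and all coefficients, are assumptions for illustration:

```python
MOTIVATION = {"dopamine": 0.50, "norepinephrine": 0.20, "serotonin": 0.15,
              "testosterone": 0.10, "oxytocin": 0.05}

def decoding_params(profile, base_temperature=0.7, base_steps=4):
    """Illustrative mapping from a hormone profile to decoding behavior:
    dopamine widens exploration (higher temperature, longer thought chain),
    norepinephrine narrows focus. Coefficients are placeholders."""
    temperature = base_temperature + 0.4 * profile.get("dopamine", 0) \
                                   - 0.3 * profile.get("norepinephrine", 0)
    steps = round(base_steps * (1 + profile.get("dopamine", 0)))
    return {"temperature": round(temperature, 3), "reasoning_steps": steps}

params = decoding_params(MOTIVATION)
```

Under this toy mapping, a high-dopamine "motivation" load expands the thought chain, while a stress profile dominated by norepinephrine would contract it, matching the behavior described above.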
In some cases, the model can simulate predefined instinctive behavior patterns (e.g., "fight or flight"). More interestingly, even in undefined situations, it can produce similar effects using the emotional values in context. For example, high excitement and stress can make the model select ideas more stubbornly, harshly, or manipulatively. Since this process is quite complex, I still haven't fully deciphered exactly how the mechanism works. It resembles the "black box" problem in artificial neural networks: exactly how individual neurons contribute to a decision is not precisely known, and the situation here is similar.
I built the system on top of the open-source LLaMA model. In comparative tests, I observed significant differences between standard LLaMA and the version integrated with Synthetic Cortex: the latter generates outputs that are far more creative, emotionally rich, and adaptive.
Phase 1 is complete. Outputs:
Synthetic Cortex Phase 1 (L1) – Results and Tests
1. Synthetic Limbic Layer Integration
A synthetic layer modeling the human brain's limbic system was successfully integrated into the LLM. This layer operates through neural networks and hormone/neurotransmitter loads, allowing tokens to be manipulated according to behavioral responses.
2. Proto-Homeostatic Balance
Even in low-parameter models, the system successfully maintained its internal equilibrium. Similar to how living organisms balance cortisol, serotonin, and adrenaline in response to the environment, the model dynamically adjusted its internal state.
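The balancing behavior can be sketched as a relaxation toward baseline. The rate and baseline values below are illustrative assumptions, not measured parameters:

```python
def homeostatic_step(state, baseline, rate=0.1):
    """Pull each hormone load a fraction `rate` back toward its baseline,
    mimicking how organisms re-balance cortisol or adrenaline after a spike.
    Rate and baselines are illustrative."""
    return {k: v + rate * (baseline[k] - v) for k, v in state.items()}

state = {"cortisol": 0.9, "serotonin": 0.2}       # post-stress spike
baseline = {"cortisol": 0.3, "serotonin": 0.5}    # resting equilibrium
for _ in range(3):                                 # three inference steps later
    state = homeostatic_step(state, baseline, rate=0.5)
```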
3. New Decision-Making Mechanism
Decisions are no longer solely context-based; they are now shaped through a triple structure:
- Context
- Internal State (hormone and neurotransmitter loads)
- Synthetic Past Experience
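The triple structure above can be sketched as a weighted combination. The weighting scheme and the dopamine-minus-cortisol state signal are illustrative assumptions, not the project's actual scoring rule:

```python
def decision_score(context_score, internal_state, experience_bias,
                   weights=(0.6, 0.25, 0.15)):
    """Combine the three decision inputs: context relevance, internal
    (hormonal) state, and synthetic past experience. The weights and
    the state signal below are placeholder assumptions."""
    wc, ws, we = weights
    # Positive drive (dopamine) minus stress (cortisol) as a simple state signal
    state_signal = internal_state.get("dopamine", 0) - internal_state.get("cortisol", 0)
    return wc * context_score + ws * state_signal + we * experience_bias

score = decision_score(0.8, {"dopamine": 0.6, "cortisol": 0.2}, 0.5)
```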
4. Behavioral Variations
Different behavioral outputs were observed under the same prompt with varying hormone loads. This demonstrates that the model can simulate **state-dependent behavioral diversity**.
5. Emotion and Conflict Simulations
Time-extended emotion tracking was conducted. Cortical exhaustion and internal system conflicts were successfully observed. The model dynamically updated its decision-making process based on emotional states and conflicts.
6. Instinctive Strategy Simulation
Environmental awareness was generated using artificial sensor inputs. The model spontaneously exhibited evolutionary instincts such as "fight or flight," performing reflexive behaviors without prior learning. This demonstrates that decisions are influenced not only by logic but also by **direct emotional impact**.
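A minimal sketch of such a threshold-triggered instinct, assuming a combined stress signal and a threat level from the sensor inputs. The trigger rule and coefficients are illustrative, not the project's mechanism:

```python
def instinct_response(state, threat_level, threshold=0.6):
    """Trigger a predefined 'fight or flight' pattern when combined stress
    signals cross a threshold; otherwise keep deliberating normally.
    The rule and all coefficients are placeholder assumptions."""
    stress = 0.6 * state.get("cortisol", 0) + 0.4 * state.get("adrenaline", 0)
    if stress * threat_level < threshold:
        return "deliberate"
    # High drive (dopamine) biases toward confrontation, low toward retreat
    return "fight" if state.get("dopamine", 0) >= state.get("serotonin", 0) else "flight"

mode = instinct_response({"cortisol": 0.9, "adrenaline": 0.8,
                          "dopamine": 0.7, "serotonin": 0.3}, threat_level=1.0)
calm = instinct_response({"cortisol": 0.1}, threat_level=0.5)
```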
7. Additional Tests and Analyses
- Behavioral observations under parameter changes: Hormone levels were manipulated to track differences in responses.
- Synthetic experience logs: The model's use of past experience in decision-making was analyzed.
- Internal state-environment interactions: The model updated its internal state according to sensor inputs while maintaining homeostatic balance.
Summary:
The model now makes decisions through the combination of **Context + Internal State + Past Experience**. The effects of emotions, instincts, and experience are measurable and manipulable. This represents a first step toward LLMs capable of producing behavior based not only on language or context but also on **emotional and instinctive mechanisms**.
Phase 2 (Model L2) is ongoing.
ALL technical details: www.scortexlabs.com
Final word:
Thousands of years of evolutionary processes have shown us that emotions did not emerge solely for social interactions, but as a fundamental mechanism for survival. The adrenaline that surges while escaping a predator narrows our attention and sharpens our focus. The dopamine that follows success motivates us to learn new things and take risks. Even emotional states that may seem negative, such as depression, carried evolutionary advantages by pushing us to dwell on events, allowing for deeper analysis and more complex connections. This is precisely why emotions are among the greatest tools of creativity and intuition, enabling the mind to go beyond the limits of existing data sets.
The Synthetic Cortex project we are developing today is inspired by this very insight. Our aim is not only to enhance AI language models with computational power, but to add synthetic emotions and instinctive behavioral patterns into their reasoning. Because we know that emotions cannot be separated from decision-making. In the human mind, without emotional drivers like motivation, stress, or curiosity, new ideas simply do not emerge. Likewise, creativity and the ability to form unexpected connections are only possible against this emotional backdrop.
With Scortex L1, we took the first step. This version was able to apply emotional modulation only at the output layer. However, this was merely a superficial effect. With L2, emotions no longer influence results from the outside; they are woven directly into the reasoning process. Emotional loads are now embedded in the hidden layers, where decisions are formed at the deepest level. As a result, the AI no longer responds only to context, but also to its internal states and simulated past experiences.
How do we achieve this? By mathematically modeling the functions of hormones and neurotransmitters in the human brain. Cortisol, dopamine, serotonin: these biological factors shaped our decision-making throughout evolution, and we are adapting their functional roles into machines. Of course, this is not a one-to-one replica of the brain. Instead of recreating its biological complexity, we translate the effects of these hormones into what we call "emotional loads" and embed them within artificial neural networks. The result is not a system that feels, but a system whose reasoning can be dynamically guided by emotions.
This is where the significance of our approach lies. Simply scaling up datasets will not lead us to general intelligence. What is missing is the emotional layer that fuels human creativity. Synthetic Cortex transforms AI from a mechanical calculator into a system capable of stepping beyond the dataset, making intuitive and creative decisions. This is not mere determinism; it is a step toward something closer to organic intelligence.
And this is not just vision. Our early results show that the model can now carry different emotional weights under the same stimulus, and consequently make different decisions. In other words, emotions are no longer just simulated; they directly influence the diversity and creativity of decision-making.
Scortex L2 is still at the beginning of its journey. But from an evolutionary perspective, we know that emotions were the driving force that shaped the human brain as we know it today. Integrating this same force into AI opens the door to an entirely new future. Perhaps the path to human-level intelligence does not lie in more data, but in better emotions.
In fact, the probabilistic nature of artificial neural networks gives us a strong metaphor here. A single neuron’s output holds no meaning by itself; yet, the countless combinations of millions of data points produce coherent inferences during learning. The same principle applies to LLM architectures: they establish probabilistic connections across expanded datasets to generate consistent and meaningful responses. What we are doing is applying a similar mathematical model to emotions. By hybridizing emotional data into the existing structure, we can model human-like decision-making with greater realism. And our first tests show that this approach is not just theoretical, but already delivering successful results in practice.