I've implemented a C89 cognitive architecture that explores an alternative to both the reward-maximization and loss-minimization paradigms: instead of optimizing a global objective, it uses local coherence as its learning mechanism.
Core Mechanism
The system learns by strengthening connections that increase coherence. Coherence is tracked per node as a running measure of how closely predicted and observed activations agree:

    node->coherence = (node->coherence * 0.9) + (1.0 - fabs(node->activation - node->value)) * 0.1;

Each update keeps 90% of the previous coherence and blends in 10% of the current agreement, so coherence climbs toward 1.0 as the gap between prediction and observation shrinks.
This departs from standard approaches in that:
No global loss function or reward signal is required
Learning occurs locally at each connection (a minimal sketch of such an update follows this list)
The system optimizes for consistency rather than maximization
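To make the local rule concrete, here is a minimal C89 sketch of what a per-connection update driven only by coherence could look like. The struct layout, the interpretation of activation as the predicted value and value as the observed one, and the 0.05 learning rate are assumptions made for illustration; none of this is taken from the project's source.

    #include <math.h>

    /* Illustrative data layout; the project's actual structs may differ. */
    struct Node {
        double activation;   /* predicted activation for the current step (assumed) */
        double value;        /* observed activation for the current step (assumed)  */
        double coherence;    /* running agreement between the two, 0..1              */
    };

    struct Connection {
        struct Node *from;
        struct Node *to;
        double weight;
    };

    /* Per-node coherence update, exactly as in the snippet above. */
    static void update_coherence(struct Node *node)
    {
        node->coherence = (node->coherence * 0.9)
            + (1.0 - fabs(node->activation - node->value)) * 0.1;
    }

    /* Strengthen the connection when it helps the target node agree with
     * its own prediction.  Everything read here is local to the two
     * endpoint nodes; no global loss or reward term appears anywhere.    */
    static void update_connection(struct Connection *c)
    {
        double agreement = 1.0 - fabs(c->to->activation - c->to->value);

        c->weight += 0.05 * c->from->activation * (agreement - c->to->coherence);
    }

The point of the sketch is that each weight change reads only the two nodes it connects, which is what makes the rule local rather than a step on any global objective.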
Implementation
The architecture consists of five specialized processing units that simultaneously evaluate different aspects of information.
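Below is a rough sketch of that structure. The unit names, the function-pointer dispatch, and the stub evaluator are assumptions made for illustration and are not the project's actual interface.

    #define UNIT_COUNT 5

    /* Placeholder evaluator; each real unit would apply its own analysis. */
    static double evaluate_stub(double input)
    {
        return input;
    }

    struct ProcessingUnit {
        const char *name;
        double (*evaluate)(double);
        double last_output;
    };

    /* Hypothetical labels for the five units, chosen only for this sketch. */
    static struct ProcessingUnit units[UNIT_COUNT] = {
        { "perception", evaluate_stub, 0.0 },
        { "memory",     evaluate_stub, 0.0 },
        { "prediction", evaluate_stub, 0.0 },
        { "evaluation", evaluate_stub, 0.0 },
        { "meta",       evaluate_stub, 0.0 }
    };

    /* One update cycle: every unit is handed the same input, so the five
     * evaluations are conceptually simultaneous even though C89 executes
     * them one after another.                                             */
    static void run_cycle(double input)
    {
        int i;

        for (i = 0; i < UNIT_COUNT; i++) {
            units[i].last_output = units[i].evaluate(input);
        }
    }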
The source code demonstrates how this achieves several capabilities that usually require more complex algorithms:
Meta-cognition emerges from rotational self-reference patterns (the co-processor interconnects are also auto-wired)
System-wide stability, tracked as a health metric, increases with node coherence (see the health sketch after this list)
Adaptation occurs without explicit reward or punishment signals (no energy is spent on punishment)
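To make the stability point concrete, here is a small sketch of a health measure. It assumes, purely for illustration, that health is the mean coherence across nodes (reusing the Node struct from the sketch above); the project may aggregate the per-node values differently.

    /* System-wide health as the average node coherence.  The averaging is
     * an assumption of this sketch; the only claim carried over from the
     * text is that the measure rises as individual nodes become coherent. */
    static double system_health(const struct Node *nodes, int count)
    {
        double sum = 0.0;
        int i;

        for (i = 0; i < count; i++) {
            sum += nodes[i].coherence;
        }
        return (count > 0) ? (sum / (double)count) : 0.0;
    }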
Why This Matters for Current AI Discussions
This implementation suggests that recursive self-reference and multi-dimensional coherence might offer an alternative path to capabilities currently pursued through scaling and reward engineering.
It provides a concrete, running example of how a system might optimize for internal consistency rather than external rewards—relevant to current discussions on alternative training paradigms.
The full implementation in ANSI C89 is available for examination and testing.