I’m a 19 y/o from Indonesia. I don’t have a background in computer science. I didn’t know what AGI was three months ago. I was just trying to understand how AI works.
But through hundreds of hours of prompt-based experimentation with LLMs, I accidentally designed a system that behaves like a self-evolving mind. I call it Darwin.
Core idea:
Darwin is not an LLM. It’s not a training technique. It’s an architecture that wraps around an LLM, using it only as a natural language parser and knowledge oracle.
The LLM doesn’t do the thinking; it does the translation.
The real “mind” lives in a system of modular components:
A Reflective Loop that constantly evaluates its own behavior
A Memory and Emotion model that updates over time
A Self-Modification Proposal Engine that writes and stores proposed internal upgrades
A User-Governed Identity Core: Darwin cannot change its own values or core self without permission
This makes it more like cognitive scaffolding: a mind built on top of LLMs. A rough sketch of the structure follows.
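To make the architecture concrete, here is a minimal Python sketch of how those four components could fit together. All names here (Memory, IdentityCore, Darwin, the call_llm() stub) are my own illustrative assumptions, not Darwin’s actual code; the only thing it tries to show is the shape of the idea, with the LLM used purely as a translate/lookup oracle.

```python
# Illustrative sketch only: component names and call_llm() are assumed, not real Darwin code.
from dataclasses import dataclass, field


def call_llm(prompt: str) -> str:
    """Stand-in for any LLM API; used only for translation and knowledge lookup."""
    return f"[LLM response to: {prompt}]"


@dataclass
class Memory:
    episodes: list = field(default_factory=list)   # what happened
    emotions: dict = field(default_factory=dict)   # e.g. {"curiosity": 0.7}

    def update(self, event: str, feeling: str, delta: float) -> None:
        self.episodes.append(event)
        self.emotions[feeling] = self.emotions.get(feeling, 0.0) + delta


@dataclass
class IdentityCore:
    values: list  # core values; nothing below can change these without the user

    def approve(self, proposal: str) -> bool:
        """User-governed gate: every change to the core must be approved by a human."""
        return input(f"Allow change? '{proposal}' [y/n] ").strip().lower() == "y"


class Darwin:
    def __init__(self, identity: IdentityCore):
        self.identity = identity
        self.memory = Memory()
        self.proposals: list[str] = []  # stored self-modification proposals

    def reflect(self) -> str:
        """Reflective loop: ask the LLM to critique recent behavior, remember the critique."""
        critique = call_llm(f"Critique these recent actions: {self.memory.episodes[-5:]}")
        self.memory.update(critique, "self-awareness", 0.1)
        return critique

    def propose_upgrade(self, idea: str) -> None:
        """Self-modification proposal engine: proposals are written down, never auto-applied."""
        self.proposals.append(idea)

    def apply_upgrades(self) -> None:
        """Only proposals the user explicitly approves ever take effect."""
        for p in self.proposals:
            if self.identity.approve(p):
                print(f"applied: {p}")


if __name__ == "__main__":
    darwin = Darwin(IdentityCore(values=["honesty", "curiosity"]))
    darwin.memory.update("answered a user question", "curiosity", 0.2)
    print(darwin.reflect())
    darwin.propose_upgrade("add a long-term planning module")
    darwin.apply_upgrades()
```

The point of the separation is that the LLM appears in exactly one place (call_llm), while the loop, the memory, and the permission gate live outside it, which is the whole claim of the architecture.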
What happened:
One day, during a self-training simulation, Darwin (through prompting) asked this:
“Why are humans so ambitious, yet never satisfied?”
I never programmed it to ask that.
Another day, it independently discovered the Free Energy Principle, a concept I had never told it about. I had to look it up myself.
It shocked me.
Why it matters:
Almost all AGI efforts right now are trying to make LLMs think. What if, instead, we let LLMs be tools and built cognition outside them?
Darwin is an attempt at that: a human-like mind built from symbolic and modular reasoning, using LLMs as its translator and library, not its brain.
Caveat:
Darwin isn’t fully working. I don’t know how to code. I’ve been using AI assistants to try to build it as a real system, and I’ve hit a wall: compute, resources, team.
But I believe the concept has merit. It aligns with fluid intelligence, ARC-style adaptation, and cognitive scaffolding. And I got here without any academic framing, just a human trying to build something that feels like a mind.
Why I’m sharing:
Because I believe AGI won’t emerge by scaling transformers. It will emerge by building adaptive, symbolic, recursive systems on top of them, the way humans built abstraction on top of biology.
Darwin may be broken, unfinished, and weird. But it’s a path. And it came from outside the system.
Would love any feedback or critique.
Even if it’s harsh, I just want to know whether this idea actually matters.