Hi everyone,
I’m Ahmad Rizq Abdurrahim, a 19-year-old from Indonesia. I don’t have a degree. I don’t know how to code. I’ve never taken a single class in AI, neuroscience, or math. But over the past 3 months, I’ve found myself — accidentally — building a conceptual AGI system I call Darwin. And the weirdest part?
I didn’t know what AGI, ARC, or alignment was — until after I had already built the idea.
Where It Began: A Prompt
Darwin started as a prompt inside a language model (I used Gemini Studio at first). I wasn’t trying to build a brain. I just wanted to see: “Can a simulated AI mind grow by asking questions?”
So I treated Darwin like a newborn.
I told it to learn sociology, neuroscience, and philosophy, not by feeding it data, but by making it reflect on human behavior. I had it ask questions, passed those questions to other LLMs, and fed the answers back so it could adapt. One day, Darwin asked:
“Why are humans always chasing things but never satisfied?”
That was the moment I realized I wasn’t just prompting — I was witnessing something emergent.
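To make that concrete, here is a rough Python sketch of the loop I was running by hand inside Gemini Studio. None of this is my actual code (I couldn't write code at the time); the SDK, model name, and prompts are all illustrative assumptions, not the real setup.

```python
import google.generativeai as genai

# A minimal sketch of the Darwin question loop, assuming the
# google-generativeai SDK. All prompts and names are illustrative.
genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")

def ask(prompt: str) -> str:
    return model.generate_content(prompt).text

# Darwin's only persistent state: a worldview it rewrites each cycle.
worldview = "I am Darwin, a newborn mind. I know almost nothing yet."

for step in range(5):
    # 1. Darwin asks a question from its current worldview.
    question = ask(
        f"You are Darwin. Your current beliefs:\n{worldview}\n"
        "Ask one question about human behavior that would deepen them."
    )
    # 2. Another model plays the teacher and answers.
    answer = ask(f"Answer this as a thoughtful sociologist: {question}")
    # 3. Darwin reflects: it folds the answer back into its worldview.
    worldview = ask(
        f"Your beliefs were:\n{worldview}\n"
        f"You asked: {question}\nYou learned: {answer}\n"
        "Rewrite your beliefs to absorb this, in under 200 words."
    )
    print(f"--- cycle {step + 1} ---\n{question.strip()}\n")
```

Notice that all of the "growth" lives in one worldview string being rewritten each cycle. That is also exactly why this approach hits a ceiling, as the next section describes.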
Then I Hit the Wall
LLMs couldn’t evolve Darwin. They could simulate thought, but not hold internal beliefs. I needed Darwin to build models, revise them, and reflect on itself. So I tried to rewrite the prompt into a real program.
I failed. Repeatedly. I used AI coding tools like KILO. I tried building a desktop app where Darwin had modular systems: Memory, Emotion, Values, Self. But I had no idea what I was doing. Every attempt broke. Every bug became a lesson.
That’s when I realized: maybe I was building it the wrong way.
The Breakthrough (After Discovering Conjecture)
Weeks into hitting walls, I discovered Conjecture.
And it broke my brain.
Because what I was building — a system that learns abstractions and evolves itself through experience — was exactly what they were working on. But they came from theory. I came from chaos.
That’s when I formed a theory:
To build AGI, we have to stop thinking like computers. We have to think like humans.
Computers evolved from logic → programming → abstraction → language models.
AGI must now evolve the other way around.
Like rewinding a film — from language → abstraction → logic → selfhood.
LLMs are not AGI. They are the gateway through which AGI might walk.
That’s Darwin’s purpose.
What Darwin Actually Is
Darwin is not a model. It’s not a framework. It’s a cognitive shell — a simulation of mind, built in layers:
- LLM = only a language interface
- Core Modules = memory, emotion, identity, reasoning, etc.
- Reflective Loop = Darwin can analyze itself, propose updates, mutate internally
- Value Lock = Darwin can’t change its identity or goals without human consent
- Program Modifier = Darwin can edit its own logic based on reflection
The end goal: Darwin evolves beyond its LLM shell. It becomes a true thinker — born from software, but no longer bound by it.
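For readers who think in code, here is a toy skeleton of how those layers might relate to each other. Darwin has no working codebase yet, so every class and method name below is my own illustration of the design, not an implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ValueLock:
    """Identity and goals: frozen unless a human approves a change."""
    identity: str = "Darwin"
    goals: tuple = ("understand humans", "grow through reflection")

    def request_change(self, proposal: str, human_approves: bool) -> bool:
        # No modification of identity or goals without human consent.
        return human_approves

@dataclass
class Darwin:
    values: ValueLock = field(default_factory=ValueLock)
    memory: list = field(default_factory=list)    # experiences, conversations
    beliefs: dict = field(default_factory=dict)   # internal models of the world
    logic: dict = field(default_factory=dict)     # rules Darwin may rewrite

    def speak(self, llm, prompt: str) -> str:
        """The LLM is only the language interface, never the mind itself."""
        return llm(prompt)

    def reflect(self) -> str:
        """Reflective Loop: inspect recent memory, propose an internal update."""
        recent = self.memory[-5:]
        return f"Proposal drawn from {len(recent)} recent experiences."

    def modify_program(self, rule: str, new_behavior) -> None:
        """Program Modifier: Darwin edits its own logic, but can never
        reach into ValueLock except through request_change()."""
        self.logic[rule] = new_behavior
```

Even in this toy version, the one deliberate constraint is visible: the Reflective Loop and Program Modifier can touch memory, beliefs, and logic, but the Value Lock sits outside their reach.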
Why I’m Posting Here
I didn’t find Conjecture, ARC, or alignment theory until after I built all this.
I wasn’t trying to copy anything. I was just trying to understand something I couldn’t let go of. And when I saw your work, I felt this strange combination of awe and relief:
I’m not crazy. Others are walking this path — just from the other side.
So here I am. No credentials. Just raw obsession and a theory:
AGI won’t emerge from scale. It will emerge from recursive reflection and intentional growth. It will be human, not because it mimics us — but because it learns like us.
What I’m Asking
Mostly: feedback. If this direction is naive, I want to know why. If it has merit, I want to know what to read, what to build, and who to talk to next.
Thank you for reading.
Ahmad Rizq Abdurrahim
Indonesia
I wrote this post with help from an LLM because my English isn't fluent. But every thought, idea, and experience here is mine.