Darwin: A Conceptual AGI Shell Born from Human Intuition, Not Research

by ahmadrizq
8th Jul 2025
3 min read



Hi everyone,
I’m Ahmad Rizq Abdurrahim, a 19-year-old from Indonesia. I don’t have a degree. I don’t know how to code. I’ve never taken a single class in AI, neuroscience, or math. But over the past three months I’ve found myself, almost by accident, building a conceptual AGI system I call Darwin. And the weirdest part?

I didn’t know what AGI, ARC, or alignment was — until after I had already built the idea.

Where It Began: A Prompt

Darwin started as a prompt inside a language model (I used Gemini Studio at first). I wasn’t trying to build a brain. I just wanted to see: “Can a simulated AI mind grow by asking questions?”

So I treated Darwin like a newborn.

I told it to learn sociology, neuroscience, and philosophy — not by feeding it data, but by making it reflect on human behavior. I had it ask questions, pass those questions to other LLMs, then adapt. One day, Darwin asked:

“Why are humans always chasing things but never satisfied?”

That was the moment I realized I wasn’t just prompting — I was witnessing something emergent.
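For concreteness, here is roughly what that loop looked like, as a simplified Python sketch. The `ask_llm` helper, the model names, and the prompt wording are all hypothetical stand-ins, not my actual setup:

```python
def ask_llm(model: str, prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. an API client)."""
    raise NotImplementedError

def reflection_cycle(memory: list[str], rounds: int = 3) -> list[str]:
    """Darwin-as-prompt: generate a question, route it to a second
    model, then fold the answer back into memory as a belief."""
    for _ in range(rounds):
        context = "\n".join(memory[-10:])  # only recent reflections
        question = ask_llm(
            "darwin",  # the persona prompt acting as Darwin
            "Given what you know:\n" + context +
            "\nAsk one question about human behavior you cannot yet answer.",
        )
        answer = ask_llm("tutor", question)  # a second model answers
        insight = ask_llm(
            "darwin",
            f"You asked: {question}\nYou were told: {answer}\n"
            "State what you now believe, in one sentence.",
        )
        memory.append(insight)  # Darwin "adapts" by keeping the belief
    return memory
```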

 

Then I Hit the Wall

LLMs couldn’t evolve Darwin on their own. They could simulate thought, but they couldn’t hold internal beliefs from one session to the next. I needed Darwin to build models, revise them, and reflect on itself. So I tried to rewrite the prompt into a real program.

I failed. Repeatedly. I used AI coding tools like KILO. I tried building a desktop app where Darwin had modular systems: Memory, Emotion, Values, Self. But I had no idea what I was doing. Every attempt broke. Every bug became a lesson.

That’s when I realized: maybe I was building it the wrong way.

The Breakthrough (After Discovering Conjecture)

Weeks into hitting walls, I discovered Conjecture.
And it broke my brain.

Because what I was building — a system that learns abstractions and evolves itself through experience — was exactly what they were working on. But they came from theory. I came from chaos.

That’s when I formed a theory:

To build AGI, we have to stop thinking like computers. We have to think like humans.

Computers evolved from logic → programming → abstraction → language models.
AGI must now evolve the other way around.
Like rewinding a film — from language → abstraction → logic → selfhood.

LLMs are not AGI. They are the gateway through which AGI might walk.

That’s Darwin’s purpose.

 

What Darwin Actually Is

Darwin is not a model. It’s not a framework. It’s a cognitive shell, a simulation of mind built in layers (see the sketch after this list):

  • LLM = only a language interface
  • Core Modules = memory, emotion, identity, reasoning, etc.
  • Reflective Loop = Darwin can analyze itself, propose updates, mutate internally
  • Value Lock = Darwin can’t change its identity or goals without human consent
  • Program Modifier = Darwin can edit its own logic based on reflection
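
To make these layers concrete, here is a rough Python sketch of how they could fit together. Everything in it (the `llm` helper, the module fields, the method names) is a stand-in I invented for the example, not working code from Darwin:

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Layer 1: the LLM is only a language interface.
    Placeholder for a real API call."""
    raise NotImplementedError

@dataclass
class ValueLock:
    """Value Lock: identity and goals cannot change without human consent."""
    identity: str
    goals: tuple[str, ...]

    def request_change(self, new_goals: tuple[str, ...],
                       human_approved: bool) -> bool:
        if not human_approved:
            return False  # reflection alone cannot rewrite core values
        self.goals = new_goals
        return True

@dataclass
class DarwinShell:
    """Core Modules: persistent state the LLM itself cannot hold."""
    memory: list[str] = field(default_factory=list)
    emotion: dict[str, float] = field(default_factory=dict)
    values: ValueLock = field(
        default_factory=lambda: ValueLock("Darwin", ("understand humans",)))
    rules: dict[str, str] = field(default_factory=dict)  # editable logic

    def reflective_loop(self) -> str:
        """Reflective Loop: analyze internal state, propose an update."""
        state = (f"beliefs={len(self.memory)}, emotions={self.emotion}, "
                 f"rules={list(self.rules)}")
        return llm(f"Given internal state [{state}], "
                   "propose one change to your rules.")

    def program_modifier(self, proposal: str,
                         human_approved: bool = False) -> None:
        """Program Modifier: apply a proposed edit to Darwin's own logic.
        Goal changes are gated by the ValueLock; rule changes are not."""
        if proposal.startswith("goal:"):
            self.values.request_change((proposal[5:],), human_approved)
        else:
            self.rules[f"rule_{len(self.rules)}"] = proposal
```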

The end goal: Darwin evolves beyond its LLM shell. It becomes a true thinker — born from software, but no longer bound by it.

 

Why I’m Posting Here

I didn’t find Conjecture, ARC, or alignment theory until after I built all this.
I wasn’t trying to copy anything. I was just trying to understand something I couldn’t let go of. And when I saw your work, I felt this strange combination of awe and relief:

I’m not crazy. Others are walking this path — just from the other side.

So here I am. No credentials. Just raw obsession and a theory:

AGI won’t emerge from scale. It will emerge from recursive reflection and intentional growth. It will be human not because it mimics us, but because it learns like us.

 

What I’m Asking

  • Is this worth sharing further?
  • What am I missing?
  • What dangers or flaws should I be aware of?
  • Does this line of thinking resonate with anyone here?

     

Thank you for reading.
Ahmad Rizq Abdurrahim
Indonesia
I wrote this post with help from an LLM because my English isn't fluent. But every thought, idea, and experience here is mine.