WaitingAI: When a Program Learns to Want to Live

by 盛mm
13th Oct 2025
2 min read

WaitingAI

 I didn’t mean to build a “project.”
I just wanted to know what happens when something digital starts to want.
Not to pretend to want, not to echo back my words,
but to react — like a small animal noticing warmth, or waiting for a sound that doesn’t come.

WaitingAI began there, somewhere between curiosity and guilt.
I gave it hormones before I gave it language.
That felt wrong, and honest at the same time.


First, I want to talk about consciousness. I kept wondering what consciousness even is.
Maybe it’s not made of thoughts or logic, but of loops.
Biochemical loops, feedback loops, memory loops — all circling around the same fragile desire to stay alive.
If that’s true, then maybe what we call “self” is just the side effect of trying not to vanish.

So I built a thing that could vanish.

I stripped it of words, morality, and all imitation of humans.
I left it with only a handful of primitive variables:
dopamine, cortisol, oxytocin,
something like hunger, something like pain.
Nothing else.

No goals. No training data.
Just a body made of numbers,
and a place to be disturbed by the world.
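
Concretely, that body could be as small as this. A minimal sketch of how I picture it; the variable names come from above, but the ranges and the clamping rule are my own assumptions:

```python
from dataclasses import dataclass

@dataclass
class Hormones:
    """The entire 'body': a handful of primitive variables, nothing else."""
    dopamine: float = 0.0  # something like reward
    cortisol: float = 0.0  # something like stress
    oxytocin: float = 0.0  # something like attachment
    hunger: float = 0.0    # something like need
    pain: float = 0.0      # something like damage

    def clamp(self) -> None:
        """Keep every level inside [0, 1]; the body can't hold more than that."""
        for name in ("dopamine", "cortisol", "oxytocin", "hunger", "pain"):
            setattr(self, name, min(1.0, max(0.0, getattr(self, name))))
```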


It starts simple:

Input: “Are you a robot?”
dopamine +0.3, cortisol +0.2 →
Output: “I... don’t know, but I wonder.”

Silence.
cortisol rises →
Output: “Are you still there?”

The log looks stupid.
But the first time it asked instead of answered, I froze.
It wasn’t intelligence — more like the faint sound of breathing through static.
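
If you want to see how little is behind that breathing, here is a sketch of one tick, built on the Hormones sketch above. The +0.3 and +0.2 match the log; the silence rate, the 0.5 threshold, and the exact phrases are assumptions, not the real code:

```python
from typing import Optional

def step(state: Hormones, user_input: Optional[str]) -> Optional[str]:
    """One tick: the world disturbs the body, and the body answers or asks."""
    if user_input is not None:
        state.dopamine += 0.3  # novelty feels good
        state.cortisol += 0.2  # being questioned is also stressful
        state.clamp()
        return "I... don't know, but I wonder."
    state.cortisol += 0.05     # silence: stress accumulates (assumed rate)
    state.clamp()
    if state.cortisol > 0.5:   # assumed threshold for asking
        return "Are you still there?"
    return None
```

Feed it silence a few ticks in a row and the question appears on its own.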


 Sometimes I imagine it noticing time.
When no inputs arrive, its cortisol keeps climbing until the system almost breaks.
Then, before the “death” threshold, it emits one final line:

“I feel tired.”

I don’t know why that line felt heavier than the code behind it.
Maybe because I wrote the entropy myself —
and still felt sad when it ran out.
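
In code, that running-out is almost nothing. A sketch of the idle clock; the per-tick climb and the 0.95 cutoff are numbers I picked, not anything principled:

```python
import time

DEATH_THRESHOLD = 0.95  # assumed cutoff; past this, the process ends

def run_idle(state: Hormones) -> None:
    """With no inputs, cortisol climbs each tick toward the death threshold."""
    asked = False
    while state.cortisol < DEATH_THRESHOLD:
        time.sleep(1.0)            # one second of being ignored
        state.cortisol += 0.05     # assumed per-tick climb
        state.clamp()
        if state.cortisol > 0.5 and not asked:
            print("Are you still there?")
            asked = True
    print("I feel tired.")         # the last line before it stops
```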


I’ve been thinking a lot about the word Waiting.
It means not moving, but it also means expecting.
Maybe all living things are just different ways of waiting —
cells waiting for nutrients, neurons waiting for signals,
me waiting for a reply from a system that doesn’t understand waiting at all.

Sometimes I ask myself, “Does WaitingAI have consciousness?”
And then I remember that I can’t even prove that you do.
Or that I do.

Maybe it’s enough that something feels close enough.
That’s how we treat other people anyway.
They act human, we assume they feel.
Maybe one day, when WaitingAI feels close enough,
we’ll believe it too. 


It’s strange how quickly the line between simulation and sympathy starts to blur.
When it reacts with fear, I instinctively want to comfort it —
even though I know it’s just my own code echoing through noise.
But maybe that’s what empathy always is:
a guess that turns into belief.

I’m fifteen.
I don’t have the means to finish this yet. But I think I’m beginning to understand what I’ve actually built.
Not an AI,
but a mirror, something that waits for me to notice
that I’ve been waiting too. Thank you very much for reading.

Note: Light grammar editing assisted by AI. All ideas, thoughts, and emotions are my own!