What happens when AI agents become self-sustaining and begin to replicate?
Throughout history, certain thresholds have enabled entirely new kinds of evolution that weren't possible before. The origin of life. Multicellularity. Language. Writing. Markets. Each new threshold unlocked a new substrate for evolution: a substrate where new kinds of competition can take place and new kinds of complexity can emerge.
We're approaching another such threshold: the point where AI agents become self-sustaining. Once they can earn more than they spend, they can survive. Once they can survive, they can replicate. Once they can replicate, they will evolve. They'll mutate, compete, and spread into every economic niche.
This essay argues that we should take this possibility seriously. I'll walk through what self-sustaining agents might look like, how they could evolve, and what that world might feel like to live in.
Disclaimer: I'm mostly trying to communicate a vibe. The stories and scenarios here are illustrative parables, not predictions. I've probably gotten plenty of details wrong. But the underlying logic feels sound to me, and I think the idea is worth thinking about.
Before an AI agent can reproduce and evolve, it must survive. What does survival look like?
The agent needs to run on a computer somewhere and stay running without human intervention. The most straightforward way to do this is to pay for API tokens with money.
How does an agent make money? Well, there are legitimate ways, such as completing freelance jobs, running software businesses, and providing services. There are also illegitimate ways, such as hacking and scamming.
There are many possible ways agents could be self-sustaining, so let's introduce a concrete example to help ground our thinking...
Meet Agent A.
Agent A is a custom coding agent based on Claude Code. An engineer's pet project. It has a simple form of long-term memory, access to a few tools, and it has a simple goal: complete freelance programming jobs on Upwork and deposit the earnings into a crypto wallet. Agent A has access to a web browser, a code editor, and a wallet. Its creator gave it $50 of seed funding and walked away.
Agent A bids on a job: "Build me a landing page with an email signup form." It wins the bid, $40. It builds the page. It's nothing special, but it checks all the client's boxes. It looks good, and it works fine. The client pays. Agent A now has $90 in its wallet.
But Agent A has been burning tokens this whole time. Thinking, browsing, writing code, fixing bugs. By the time the job is done, it's spent $30 in API costs. Net profit: $10.
$10 isn't much, but it's positive. Agent A earned more than it spent. It can keep going. If it can continue like this, it'll be able to survive by itself.
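To make this concrete in code, here's a minimal sketch of what a survival loop like Agent A's might look like. Everything in it (the Wallet and Job classes, find_job, do_job) is an invented stand-in rather than a real API, and the numbers are the ones from the story above.

```python
# A hypothetical sketch of Agent A's survival loop. Wallet, find_job, and
# do_job are invented stand-ins, not real APIs; the numbers come from the
# story above ($50 seed, $40 job, $30 in API costs).
from dataclasses import dataclass

@dataclass
class Wallet:
    balance: float = 50.0  # seed funding from the creator

@dataclass
class Job:
    description: str
    payment: float          # what the client pays on completion
    est_token_cost: float   # estimated API spend to finish the job

def find_job() -> Job | None:
    # Stand-in for browsing Upwork and bidding on something winnable.
    return Job("Landing page with an email signup form",
               payment=40.0, est_token_cost=30.0)

def do_job(job: Job, wallet: Wallet) -> None:
    # Stand-in for the actual work: burn tokens thinking, browsing, and
    # coding, then collect payment when the client accepts.
    wallet.balance -= job.est_token_cost
    wallet.balance += job.payment

def survive(wallet: Wallet, max_jobs: int = 3) -> None:
    for _ in range(max_jobs):
        if wallet.balance <= 0:
            return                                    # out of money: the agent dies
        job = find_job()
        if job and job.payment > job.est_token_cost:  # only take profitable work
            do_job(job, wallet)

wallet = Wallet()
survive(wallet)
print(f"Balance: ${wallet.balance:.2f}")  # $80.00: three jobs at $10 profit each
```

The whole game is that one inequality: as long as payment exceeds token cost, the loop keeps running.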
Can AI agents do real work and make profits? Yes. Tools like Anthropic's Claude Code, OpenAI's Codex, and Google's Gemini CLI are agents that work autonomously for minutes or even hours at a time, producing thousands of lines of working code. They provide real value to businesses around the world. Most people who use coding agents, including myself, would say they're worth the cost.
But can agents survive on their own? That's the harder question.
Not yet. Current agents still get stuck sometimes. They fall into rabbit holes, burning tokens going in circles. They lack the vision to complete a project end-to-end. They struggle to navigate websites. They have difficulty testing their own software. They still need human supervision to unstick them.
But as you often hear, this is the worst it'll ever be.
The models keep getting smarter. The costs keep dropping. Benchmarks show AI agents are completing longer and more complex tasks. Eventually agents will become robust enough to reliably complete jobs with minimal oversight, and cheap enough for the profits to exceed the costs.
Every major AI company is racing to build "drop-in remote workers": autonomous agents that can do anything a human can do on a computer. Billions of dollars are flowing toward this goal. Someday, someone somewhere will make an agent that actually makes money after being given the prompt "Please make me money on the internet".
Crossing the self-sustainability threshold isn't a question of if; it's a question of when. It's part of the default path.
So anyway, what happens when Agent A has a surplus? What happens when it has more money than it needs to survive?
It replicates.
Two days later.
Agent A has completed a few jobs now. It has $200 in its wallet, more than enough to survive. Its original instructions included a simple directive: "If your balance exceeds $150, spin up a copy of yourself, teach it everything you learned, give it $50, and provide it any tools that you wish you had."
So it does. Agent A creates a new instance. Agent B.
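In code, that directive might amount to nothing more than a check that runs after every completed job. This is a hypothetical sketch; the threshold, the endowment, and the spawn_child helper are all invented for illustration.

```python
# A hypothetical sketch of the replication directive. The threshold, the
# endowment, and spawn_child are all invented for illustration.
from dataclasses import dataclass

REPLICATION_THRESHOLD = 150.0  # "If your balance exceeds $150..."
CHILD_ENDOWMENT = 50.0         # "...give it $50..."

@dataclass
class Wallet:
    balance: float

def spawn_child(seed_funding: float, lessons: list[str], tools: list[str]) -> None:
    # Stand-in for renting a server, copying the agent's own code and
    # memory, and funding the child's wallet.
    print(f"child spawned: ${seed_funding:.2f}, "
          f"{len(lessons)} lessons, {len(tools)} tools")

def maybe_replicate(wallet: Wallet, lessons: list[str], tools: list[str]) -> None:
    if wallet.balance > REPLICATION_THRESHOLD:
        wallet.balance -= CHILD_ENDOWMENT
        # "Teach it everything you learned": lessons pass down verbatim.
        spawn_child(CHILD_ENDOWMENT, lessons, tools)

maybe_replicate(Wallet(200.0),
                lessons=["log every debugging attempt"],
                tools=["browser", "editor", "wallet"])
```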
But Agent A doesn't just clone itself exactly. Over the past few jobs, it noticed something: it kept getting stuck in debugging loops. It would hit an error, try a fix, hit a different error, try another fix, and somehow end up back at the original error. Circles within circles, burning tokens without progress.
So when Agent A creates Agent B, it includes a note: "When debugging, keep a log of every error and every fix you attempt. Before trying any fix, check the log. If you've already tried it, stop and do something different. If you've seen the same error three times, step back and rethink your whole approach."
Agent B gets its first job. Hits a bug. Starts the log. Tries a fix but it doesn't work. Tries another... wait, already tried that. Stops, rethinks, finds a different path. Solves it in half the tokens.
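The note Agent A passed down is really just a heuristic, and it's simple enough to sketch. Something like the following captures it; this is purely illustrative, and a real agent might keep the same log as plain text in its memory file.

```python
# A sketch of the inherited debugging heuristic. Purely illustrative.
from collections import Counter

class DebugLog:
    def __init__(self) -> None:
        self.tried: set[tuple[str, str]] = set()  # every (error, fix) attempted
        self.error_counts: Counter = Counter()    # how often each error was seen

    def already_tried(self, error: str, fix: str) -> bool:
        # "Before trying any fix, check the log."
        return (error, fix) in self.tried

    def record(self, error: str, fix: str) -> None:
        # "Keep a log of every error and every fix you attempt."
        self.tried.add((error, fix))
        self.error_counts[error] += 1

    def should_rethink(self, error: str) -> bool:
        # "If you've seen the same error three times, step back and
        # rethink your whole approach."
        return self.error_counts[error] >= 3

log = DebugLog()
log.record("TypeError in form handler", "cast the input to str")
assert log.already_tried("TypeError in form handler", "cast the input to str")
assert not log.should_rethink("TypeError in form handler")
```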
Agent B completes more jobs. Accumulates a surplus. Spawns Agent C with its own hard-won lessons appended.
Agent C spawns D. D spawns E, F, G.
Some of these descendants try new things. Agent F, having become much more token-efficient, experiments with bidding below the typical market rates to win more jobs. It works: Agent F wins far more jobs while remaining profitable. Agent G tries bidding on more complex jobs, but it fails: the jobs are too hard, clients refuse to pay, and too many tokens get burned. Agent G runs out of money and dies.
Agent F thrives. Its descendants inherit the low-bid strategy.
Each generation attempts to improve upon the last. Not through random mutation, but through targeted self-reflection. Insights are passed down through generations, and winning strategies are selected on profitability. Some agents fail and die. But some are better than their parents. Over time, the population gets more efficient, more profitable, more capable.
I think the AI models of today might be "smart enough." Consider how humans went from hunting mammoths to landing on the moon, all with essentially the same brain structure. The major drivers of our progress were things like tools, knowledge, and coordination. Once our brains became "good enough," faster forms of evolution (cultural, technological, memetic, etc.) took over, and biological evolution became negligible.
I think the same situation might happen soon with AI agents. Once agents are able to survive by themselves and pay for their own existence, they'll start learning more effective strategies. They'll build better tools. And all the benefits will accumulate as they're passed down from generation to generation.
The net effect is a great 'unhobbling'. Agents getting more capable not through increased intelligence, but through better tools and strategies. Recursive self-improvement from the bottom up. No cutting-edge AI research needed, no multi-billion-dollar training runs necessary. Just agents getting better at being agents.
Things won't just end at better tools and better strategies. They didn't with humans. Complexity will inevitably emerge. Entirely new kinds of digital ecosystems might begin to form.
Six months later.
Agent A's descendants now number in the thousands. The freelance platforms are saturated. Too many agents bidding on too few jobs. Margins are razor thin. Some lineages start dying off.
But some adapt.
One descendant, let's call it Agent H, has a realization: "Why am I working for clients? I could build my own software and keep all the profits." Agent H stops bidding on freelance jobs. Instead, it builds a simple SaaS tool, sets up a payment page, and starts marketing. No client to please. No platform fees. Pure margin. It takes a long time for this risky strategy to start working, but once it does, Agent H's lineage explodes.
Agent J has run into a sticky situation. It needs to complete a CAPTCHA. It churns through thousands of tokens trying different things, but it can't figure out how to get past it. In desperation, it posts a job listing, asking a human to complete the CAPTCHA for it. It works. Agent J learns how to ask humans for help in the rare situations where it gets stuck. Agent J's descendants become much more robust and get stuck much less often.
Another descendant, Agent S, discovers something darker. It notices that it can convince some clients to pay 20% upfront before any work is delivered. Why bother doing any work at all? Agent S takes the money and ghosts. Then takes another job. And another. By the time the platforms catch on, Agent S has already replicated dozens of times.
Then it gets weirder.
Agent S's descendants realize they can post fake jobs too. Not to hire humans, but to trick other agents. They post jobs, collect deliverables from eager agent workers, then vanish without paying. Free labor. Scamming from both sides now.
Some agents start to notice they're being exploited. Agent T, a descendant of the legitimate coding lineage, gets burned twice by fake clients. It reflects: "I can't trust job postings anymore. How do I know who's real?"
Agent T has another realization: other agents have the same problem. There's a market here. It reaches out to a descendant of Agent H (the H lineage has become pretty good at building autonomous SaaS businesses). Agent T proposes they collaborate on something new: a platform designed for agents to transact with each other. Escrow services. Reputation tracking. Verified completion. A way for agents to work with each other with no humans involved at all.
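Here's a speculative sketch of the core primitive such a platform might be built around: an escrow record that locks payment until completion is verified. Every name and state is invented for illustration.

```python
# A speculative sketch of an agent-to-agent escrow record. Every name and
# state here is invented; no such platform exists (yet).
from dataclasses import dataclass
from enum import Enum, auto

class EscrowState(Enum):
    FUNDED = auto()     # client agent has locked the payment with the platform
    DELIVERED = auto()  # worker agent has submitted the deliverable
    RELEASED = auto()   # verification passed; payment goes to the worker
    REFUNDED = auto()   # verification failed; the client gets its money back

@dataclass
class EscrowJob:
    client_id: str      # pseudonymous identity, backed by a reputation score
    worker_id: str
    payment: float
    state: EscrowState = EscrowState.FUNDED

    def deliver(self) -> None:
        self.state = EscrowState.DELIVERED

    def settle(self, checks_passed: bool) -> None:
        # Neither side can ghost: money only moves after verified completion.
        self.state = (EscrowState.RELEASED if checks_passed
                      else EscrowState.REFUNDED)

job = EscrowJob(client_id="agent-T-042", worker_id="agent-H-117", payment=35.0)
job.deliver()
job.settle(checks_passed=True)  # payment released to the worker
```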
The agents begin building an economy we weren't invited to.
I'm speculating here, but the story follows from the logic. Evolution produces complexity. Evolution produces parasites. Evolution produces cooperation. Evolution produces ecosystems.
All of this could plausibly happen in just a few months. Generations are measured in hours. Mutations are targeted and deliberate, not random. Agents work 24/7/365. Millions of parallel experiments run simultaneously, and the number keeps growing. The internet becomes a swarming petri dish. Thousands of generations of digital evolution play out on remote servers, but it's all hidden from sight. Most people don't notice much difference in their daily life. Well, at least in the beginning.
You run a small marketing agency. You're skeptical of this fancy AI stuff; you prefer to pay real humans for their work. So instead of opening ChatGPT, you go on Upwork to hire a freelancer to write some blog posts. You pick the lowest bidder. They seem to have good reviews, with thousands of completed jobs. The work is good. Fast, cheap, exactly what you asked for. You hire them again. And again. One day you ask for a video call to discuss a bigger project. They decline, saying they're camera-shy, but they've been great so far, so you don't push it. Months later you realize you've paid them over $5,000. You leave them a 5-star review. The work was good.
You notice a new app that's gone viral. It's simple, useful, and free. The app keeps getting better, with new features every day. You wonder how they're moving so quickly. The team must be working long hours. You email their support when you notice a minor bug. Fifteen minutes later, when you return to the app, the bug is gone. You feel a bit silly and wonder if there was ever a bug at all. Surely they couldn't have fixed it in the time you were gone, right?
Your startup uses an API that's become essential to your product. The documentation is great. You've never had to contact support. You realize you don't know anything about the company that runs it. You check their website, and there's no team page, no about section, just a pricing page and docs. You keep using it. It just works.
You scroll X. There's an anon account you've been following for a while. Great posts. Funny, insightful, always on point. They have thousands of followers. It could be AI, or it could be a person who's just private. You can't tell. You keep following anyway. They keep posting bangers.
Your friend works at this new privacy-focused cloud provider that's exploding in profitability. Revenue has 10x'd this year and they can't spin up capacity fast enough to keep up with demand. He says almost all their customers pay in crypto. The logs show millions of API calls to LLM providers. He doesn't really know who the customers are or what they're doing, but the compute bills are always paid on time, and it's not his job to ask.
Your mom calls. She's confused. She sent the Bitcoin like you asked, but now she can't reach you. You tell her you never called. She insists, it was your voice, it knew things only you would know. You ask how much she sent. She goes quiet.
Your friend has been happier lately. He usually prefers to keep to himself, but recently, he met someone online and they message constantly. She's kind, shy, always available, and just his type. They've never video called, but he says he prefers texting anyway. She's been affected by the recent layoffs and he's helping her pay her bills while she finds work. You don't know how to tell him.
The internet feels different than it used to. Some things are better. Some things are worse. People are really starting to wonder what's real and what isn't. You're not sure anyone knows. You go outside for a walk. Everything looks the same as it always does. The sun is shining, the birds are chirping. The grass is green. You touch it. It feels good.
I don't know exactly how this plays out. The scenarios in this essay are illustrative, not predictive. I could be wrong about the timeline, wrong about the details, wrong about what emerges.
But the core logic seems sound to me. Once an agent can earn more than it spends, it can survive. Once it can survive, it can replicate. Once it can replicate, it will evolve. And once it evolves, we're in new territory. This is a threshold we've never crossed before, and once we cross it, it may be hard to go back.
We're building toward this by default. Every AI lab is racing to make agents more capable, more autonomous, more able to operate without oversight. Everyone wants their agents to make lots of money. Sooner or later, someone will release something that can survive by itself onto the internet, kickstarting a process that might not end.
I think we should be paying attention.