By Nik Popgeorgiev, February 27, 2026.
I can't help but think about the coming world. All generations would like to believe they are living in extraordinary times. We all want to be significant, to be the ones who witnessed the moment that changes everything. But let's be honest. Not every generation had that luck. For most of human history, the decor of life changed very little between the moment a person was born and the moment they died. The same fields, the same tools, the same sky. It is only in the last hundred years that a single human lifetime could contain completely different worlds within it. In the last thirty, the pace became almost insane, and in the last five, beyond anything we had seen before.
I grew up with a phone receiver in my hand, its familiar curly cable stretching across the room. I remember rewinding cassette tapes with a pencil when the tape jammed, guarding every song like treasure. And I remember the modem's creepy metallic scream, the strange voice of what was coming. Now the world is unrecognizable. Not just more advanced but almost a different reality altogether, like waking up in a dream, not from one. A revolution is here, and I feel it with every bone in my body. And if I'm honest, part of me is intoxicated by it.
For the past months I've been building apps like a child let loose in a laboratory. Days not long enough for the ideas I want to test, the features I want to try, the possibilities unfolding faster than I can write the instructions for the machine to code them for me. There is a joy in this moment that is hard to describe. Empowered. Independent. Almost almighty. The sea that once stood between my ideas and anyone who could build them has dried. I'm walking on land I've never touched before.
And yet I'm aware that not everyone feels what I feel. Many people still experience AI as something distant, technical, meant for engineers and specialists. That gap between those living inside this change and those still watching from the outside is widening. And this essay is written mostly for them.
The World That Moved In
At a recent talk with small and medium business owners, I looked around the room and realized many of them still thought of AI as a future decision. Something to evaluate, maybe budget for next year. I put it to them this way: "If you've been awake for five hours this morning, you've already spent five hours interacting with AI." It chose what news you saw. It filtered your inbox. It routed your drive. It approved or flagged your credit card transactions.
The distance between "cutting edge" and "everyday life" has collapsed. Businesses that believe they sit safely away from technology are already being shaped by it. The world is no longer changing by the decade, nor even by the year. It is changing by the day, sometimes by the hour. And while many are still debating whether to open the front door, AI is already sitting on the sofa in our living rooms.
What Keeps Me Up
And yet, there are things that keep me awake at night. Not as a figure of speech. I genuinely lie there in the dark, staring at the ceiling, my mind running ahead of me toward what is coming.

Science fiction was my world growing up. Perhaps that is why I always hoped I would live long enough to see flying cars, robots walking beside us, and spaceships reaching distant planets. I was fascinated by Asimov, Heinlein, and Stanisław Lem, by worlds where intelligence was something vast and mysterious but still understandable. Those books shaped how I imagined the future long before the future began knocking on our door.

When I was still a boy, a friend of mine told me he had a robot hidden in his basement. I believed him completely. Even when my parents explained that such a thing did not exist, I refused to let go of the possibility. For a while, I was certain it was down there, waiting. We still laugh about it to this day. Or we used to. Lately the joke lands differently. He can no longer say I was wrong. The robot is in the basement now. In all of our basements.

And now that it's finally here, the future I waited for my whole life, I find myself asking questions I never thought to ask as that impatient boy. Because the question that haunts me is not a small one.
Is what's coming an abundance? A four-day working week, a three-day working week, a world where no one needs to work at all, an economically liberated civilization free at last from war and poverty and pollution? Or is it the other thing? A global crisis. A failure of nerve and judgment. A crash born from building something so powerful that we lost the thread of control before we even realized we had dropped it. I can't help but rewind Steinbeck's The Grapes of Wrath in my head. That epic of displacement and dignity stripped away, of people slowly replaced by forces too large and too distant to care. Except this time, it would not be the machine replacing the farmhand. It would be the algorithm replacing the lawyer, the architect, the accountant, the writer. All of us.
I wonder sometimes whether it was instinct or intuition that led me to write a short story about this ten years ago, long before the current madness started. I titled it "It Didn't Matter". A story about people faithfully maintaining a system designed to keep machines under control, not realizing the machines had already broken free of it long ago. Almost a joke at the time. The older I get, the less funny that title becomes.
The Parent Problem
When I read Dario Amodei's essay "The Adolescence of Technology" on where this is heading, what struck me most wasn't the ambition of the vision. It was the honesty about the risk. Here is the CEO of one of the most powerful AI companies in the world, and he is not promising utopia. He is describing a fork in the road, and he is not pretending both paths lead somewhere good. That woke me up in a way my own thoughts never could.
Amodei describes a near future where a single datacenter could house what he calls "a country of geniuses". An intelligence operating at a scale and speed no human nation could match. With us or against us. That image is no longer a thought experiment. He treats it as an engineering reality we are already building toward. And that shift, from fantasy to engineering problem, is what I found hardest to accept. I had always wondered how it was even possible, how an algorithm, a set of human-created mathematical rules, could ever develop consciousness and turn against its creators. But Amodei isn't describing a machine that wakes up and decides to rebel. He is describing something simpler and more frightening. A system so complex that a mistake in how we shaped it, a gap in its training, a value slightly miscalibrated at enormous scale, could produce consequences nobody intended, and nobody can easily reverse. Not evil. Just wrong in ways we didn't notice until it was too late. For the first time, the doubt felt less like science fiction and more like an unsolved problem sitting on someone's desk.
In a way, growing a superior AI is not much different from raising a child. We bring our best intentions, our most careful thinking, our deepest hopes. And yet any parent will tell you that you never fully control the outcome. The same words land differently in different moments. The same love produces different people. We shape, but we do not determine. And the vulnerability surface of raising an intelligence far greater than our own is so vast, so filled with moments we cannot anticipate, that even our best intentions may simply not be enough. We might do everything right and still produce something we never meant to build. Not out of malice. Not out of negligence. But because the task is too large, too complex, and too alive to fully contain.
And so, it falls on us. How we nurture this, how carefully we parent it, how honest we are about our own blind spots. Whether what emerges is a protector or a threat depends on choices we are making right now, in these early years. And the terrifying part is that even our best choices carry no guarantees.
But let's say we get it mostly right. Let's say the thing we build is mostly good, mostly on our side. I still wonder how much pain we will absorb during its teen years. Those turbulent years when it will take jobs before it creates new ones, when it will move faster than institutions can adapt, when it will disturb not just economies, lifestyles, and beliefs, but identities. It might steal our work, our attention, and even our sleep. Not because it's evil, but because change will move faster than we can adapt, and because we are slower and more fragile than we like to admit.
What We Still Get To Choose
And yet I don't want this to be only fear. Fear is real. But fear alone is not enough. It can become an excuse to freeze, to watch, to wait for someone else to decide. The reality is that this moment holds both terror and hope. We are not powerless spectators. Not yet. We still have choices: in how we build, how we regulate, how we educate our kids, how we define human value, how we distribute gains, how we protect dignity, how we keep meaning from being automated away along with everything else. We still decide what is moral. Where the line is. Maybe the deepest question isn't whether AI will change the world. It will. The question is whether we will stay awake enough, morally, spiritually, culturally, to change with it and still recognize ourselves on the other side.
I don't know which future wins. I only know that something is arriving, and we are already living inside the turning point. And if we want the coming world to be good, we can't treat this like just another news cycle, another thing to scroll past and forget. We have to treat it like what it is: a new chapter of human history. Or maybe a whole new book. One that will demand not only intelligence, but wisdom.
And maybe that is why my mind keeps drifting back to the books that first taught me how to imagine intelligent machines. In Asimov's stories, the rules governing robots were simple and clear. Three laws, written to ensure that no matter how intelligent a machine became, it could never turn against us. Back then, they felt indisputable. Elegant and unbreakable.
First law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second law: A robot must obey orders given by human beings, except where such orders conflict with the First Law.
Third law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
I remember reading them and feeling calm. Certain that if we ever built intelligent machines, they would be on our side. That they would always have our back. That we would be stronger with them and not replaced, not diminished, but extended. A robot was an image of a true companion, a loyal partner, a caring intelligence making life safer and our reach longer.
But laws written in novels are not guarantees written into reality. They were never promises. They were hopes.
Now we are no longer children imagining the future. We are the ones engineering it. The question is not whether intelligence will grow. It will. The question is whether wisdom will grow with it. Whether we can guide this force, this gathering tornado, and shape it into something that lifts rather than consumes.
Will we manage to steer it, or will it outrun us? That remains unwritten. And perhaps that is what makes this moment truly extraordinary. Not that change is coming, but that we are still early enough to shape what it becomes.