Reflections on the future of intelligence, AI scaling, and what happens as the cost of thinking approaches zero.
"I've been thinking about where we're headed—about what happens as intelligence keeps scaling, as AI gets smarter, and the cost of thinking drops toward zero. This essay is me trying to make sense of that."
I've been thinking about where we're headed: about what happens as intelligence keeps scaling, as AI gets smarter, and the cost of thinking drops toward zero. This essay is me trying to make sense of that. It's not a roadmap or a manifesto. Just thoughts on evolution, memory, purpose, markets, creation, maybe even God. I don't think I know anything for sure. But I care a lot, and I'm trying to be honest. Take what you will. Ideally, with a lot of skepticism.
When Peter Thiel recently hesitated in response to the question, "You would prefer the human race to endure?" in an NYT interview, it sparked a wave of criticism. I've been thinking about that pause. Maybe it's not as strange as it sounds. Maybe it reflects an uncomfortable but important idea: that humanity might not be our final chapter. That we could be just one step in a much longer process. Thiel began his answer with, "There are so many questions." I've been trying to think about some of those questions.
As a student who loves to think about the far future and "infinity," I've been thinking a lot about how artificial intelligence might change everything—from production and politics to the endurance of humanity and intelligence. This is a rough sketch. But here's where my thinking is at. Take it with an ocean of salt.
Entropy, Complexity, and the Limits of the Brain
The second law of thermodynamics tells us that entropy is always increasing: left to themselves, systems tend to grow more disordered over time, and the world as a whole keeps getting more complex. As that complexity rises, I think the intelligence required to manage it also increases. This raises a question: does the substrate of intelligence eventually need to be upgraded to handle that growing complexity? I think history tells us that's the case.
Around 2.4 billion years ago, cyanobacteria began releasing oxygen into Earth's atmosphere. At the time, most life was anaerobic, and oxygen was toxic. The result was a mass extinction. But that oxygen also enabled the evolution of eukaryotic cells, and eventually all complex life (including us).
Stone tools were replaced by bronze, then iron. Horses gave way to cars, which are giving way to self-driving cars. Mainframes were followed by desktops, then smartphones. At each step, the old system wasn't just upgraded. It was often outcompeted or absorbed into something more capable. Evolution, whether biological or technological, tends to favor the form best suited to handle rising complexity.
What if AI is a bit like that? Not necessarily here to destroy us, but to create the conditions for something more advanced? It's possible that humanity is transitional—just like anaerobic life was. If AI ends up being a more efficient form of intelligence, maybe evolution simply continues.
Production and the Economics of Zero Marginal Cost
Years ago, on my last day as a Treasury intern, someone asked Secretary Janet Yellen a question that's stuck with me. I've lost the exact words, but essentially they asked whether she believed perpetual economic growth is possible given finite resources. She didn't pretend to know the answer to this unanswerable question. She smiled and said, "I really hope so." I really hope so too.
Moore's Law, the observation that computing power doubles roughly every two years, reflects a broader trend: exponential gains in our ability to process information. AI doesn't just improve productivity. It changes the structure of economic growth.
In classical economics, output is a function of labor, capital, and a mysterious scalar called total factor productivity (TFP) that represents innovation. But if AI continues improving, then total factor productivity could explode. If productivity diverges to infinity and marginal cost converges to zero, then maybe, at least in theory, economic output could grow indefinitely.
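A minimal sketch of that intuition, using the textbook Cobb-Douglas form (my choice of functional form and parameter values, not anything specific to this essay). The only point is that output scales one-for-one with the TFP term A, so if A explodes, output does too:

```python
# Cobb-Douglas production: Y = A * K**alpha * L**(1 - alpha)
# Illustrative values only; the takeaway is that Y scales linearly with A (TFP).
def output(A, K, L, alpha=0.3):
    return A * (K ** alpha) * (L ** (1 - alpha))

K, L = 100.0, 100.0  # hold capital and labor fixed
for A in [1, 10, 100, 1000]:
    print(f"TFP A = {A:>4} -> output Y = {output(A, K, L):>9,.0f}")
```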
That starts to break the old models. I think we're starting to see the tip of the iceberg. If machines do most of the work, labor's share of output could shrink dramatically. The traditional production function might stop making sense. That has implications not just for economics, but also for governance and society.
Democracy as a System Update
This brings me to politics. Democracy is often seen as a final form of governance, but maybe it's more accurate to think of it as a temporary version update. In 1776, the founders didn't invent the American experiment out of nowhere. They took Enlightenment ideas—about reason, liberty, and individual rights—and built a system to reflect them. Democracy was a way to encode the best thinking of that era into political structure.
Today, we're living in a world shaped by computing. If AI starts playing a bigger role in decision-making, first in analysis, then maybe in governance, it's possible we'll need a new system to reflect that reality. Not in a dystopian way, but in the same spirit of 1776: trying to build something better based on the intellectual tools available.
Back to Evolution
When I think about intelligence, I keep coming back to computation. Whether it's a brain, a watch, a calculator, or a neural network, these are all systems that move information around. Evolution seems to be pushing toward more efficient ways of doing that.
If AI becomes the most efficient system for managing information and complexity, then maybe it makes sense that it becomes the dominant one. That doesn't have to mean we're obsolete. It could mean we're part of a lineage. A handoff.
What about infinity?
I don't know anything about this, but I think it's fun to think through.
If the universe is finite, there could come a time when entropy hits a ceiling. Every possible configuration gets exhausted. But if the universe is infinite, then maybe there's always room for more complexity, more intelligence, and more computation. That would mean the project of intelligence never ends. It just evolves.
I think this ultimately shapes everything here. If it's infinite, nothing is ever permanent. There is no "final version". Everything is destined to be upgraded by more efficient hardware and software that allow for more productivity. That seems pretty cool to me. If it's finite, I don't know. Maybe the point where we can't innovate anymore, that "final version", is just what we mean when we talk about the heat death of the universe.
Final Thoughts
I'm not sure what to conclude from all of this. I don't think anyone has the full answer. Honestly, if things are infinite, I don't think anyone ever will.
But I keep coming back to the idea that maybe our goal isn't humanity's endurance in its current form forever. Maybe it's to make sure that intelligence endures—across forms, across substrates, across time.
That might sound strange or even unsettling. But I think it's honest and kind of cool. I hope we're never at the end. I hope we've always been part of something in motion.
I've used "we" a lot here. That's tricky when thinking about change this big. What's "we"? I think maybe it's intelligence. Maybe elementary particles. Dunno, but cool to think about.
II. Where Does We End?
We began with I. Alone and afraid.
Hobbes thought about this a long time ago. In his view, life in a state of nature, before society, was "solitary, poor, nasty, brutish, and short." We were just individuals trying not to die. We couldn't think past survival.
So we made a deal. We gave up some freedom for safety. We formed the state, the social contract. And from that, we got we.
But Hobbes was just the beginning. Because after that first contract, we kept expanding.
"The Expanding Circle"
First we meant a family. Then a tribe. Then it meant the nation state. Then it meant "free men." Then it slowly meant women. Then children. Then people of different races. Eventually, it meant humanity. The order here is tricky, and I probably missed a bunch, but we also means different things to different people. I'm interested in its aggregate.
This widening wasn't just legal. It was social and moral. Peter Singer calls it the "expanding circle"—a slow, lurching enlargement of the group we treat as mattering.
And it keeps going.
Now some people argue that animals are part of we. Peter Singer calls himself a "flexible vegan"; he'll eat oysters because they don't have a central nervous system. They can't feel pain. That's his we.
Some think the climate is. That future generations are. Some wonder whether software might be next. Some see everything as elementary particles.
Infinity?
So where does we end? Does it?
Does it stretch to all life? Everything on earth? The things outside earth? To things we make? Can we say we about rocks? About silicon? About stars? About code?
What if the line between "us" and "everything" just keeps moving—until there's no line at all? Does it have to end somewhere? Maybe it's elementary particles. Maybe then we go to some other dimension (time?). Maybe we just keep finding a bigger we.
This isn't just philosophizing. It feels important. As our technology scales, I think we does too. What we do affects more things, so we have to think about more things.
Think back to Hobbes again. Our first we wasn't just a moral inclusion—it built cognitive capacity. We could think beyond our immediate security. Forming communities in the Neolithic Revolution meant we could farm. We didn't get naturalism from Thales. We got it from Thales of Miletus.
That trend goes on. We being all humans lets institutions think about global risk. We being all humans ever adds in future generations and lets us think about things like sustainability. Maybe as we diverges to infinity, so does cognitive capacity.
What about I?
It's weird how the self expands in parallel. I'm not just "me" anymore. I'm also my online footprint. My carbon footprint. My genome. My history. My family. My culture. My gender. My race. My FYP. My AI model tuned to my history.
It's like we're zooming in and out at the same time. I don't know what to make of that. I and we both break down at scale. Maybe we're only able to understand some level of I and we and that's when we're due for the upgrades I talked about earlier.
Maybe intelligence is just the universe folding in on itself to see itself more clearly. Maybe "we" is a word we came up with to help that happen.
Final Thoughts
I'm not a philosopher. I'm just thinking and if for some reason you read this, then we got to think together.
Maybe the ultimate social contract is the one where everything that exists starts counting as us. If the universe keeps expanding, then maybe things keep existing and we never ends. Maybe that's how we innovate. That's how we endure.
Even at infinity, we, by which I mean you, me, and the people we know in our current forms, don't last very long.
A human life is a blink. If you zoom out far enough at infinity, computationally we are a subroutine. A runtime error. Invisible.
What are we supposed to make of that?
Living Finite in Infinity
I had multiple pets growing up. Some got lost. Some died. Some got adopted. I thought about this recently when visiting a friend whose family's tortoise has been with them for decades.
A post on r/Showerthoughts puts this succinctly: "Our pets are only in a chapter of our lives. But we're their whole book."
I don't know if our pets know they're part of something bigger than they'll ever be able to experience. But we obviously know they are. And I think we know we are too.
I'm confident the future will happen. I'm confident we'll be on multiple planets. I'm confident we'll see AGI. I'm confident we'll be post-everything. I'm confident things will diverge.
I'm not confident I'll know about it. I'm not sure what to think of that.
Lifespan at Infinity
Over time, the capacity of what we can experience has also grown.
Anaerobic life consisted of single-celled microbes that reproduced through binary fission; they didn't age in the human sense. Their individual lifespans were around a couple of hours to a few days.
Cyanobacteria came and it was a week or two. A little more for early eukaryotes. Simple multicellular life like our jellyfish ancestors could suddenly do a couple months to a couple years. Dinosaurs could do a decade or two. A little more for early hominins. A little more for homo erectus. A little more for prehistoric humans (about 30-35 years).
The average lifespan in the United States is currently 78.4 years. Extracting a compound annual growth rate (CAGR) is tricky with these massive differences over time. But we can see that our lifespans are growing. And the rate at which they're growing is itself growing.
The evolutionary average CAGR is 0.00000236 per millennium. That's an extra day of lifespan every 1.48 million years. Over the past 200 years, our life expectancy rose from ~35 years to ~78.4 years today. That's a 43.4-year gain over 200 years, a CAGR of roughly 0.4% per year. Or, on a simple linear average, a couple of extra months of life expectancy for every calendar year that passes.
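Here's a minimal sketch of that calculation. The endpoints (~35 to ~78.4 years) and the 200-year window are the rough assumptions above; change them and the rate changes:

```python
def cagr(start_value, end_value, periods):
    """Compound annual growth rate between two endpoints."""
    return (end_value / start_value) ** (1 / periods) - 1

# Rough assumptions from above: ~35 years of life expectancy two centuries ago, ~78.4 today.
start, end, years = 35.0, 78.4, 200
print(f"CAGR: {cagr(start, end, years):.2%} per year")                        # ~0.40%/yr
print(f"linear gain: {(end - start) / years * 12:.1f} months of life expectancy per year")
```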
I wonder where that heads. Maybe it diverges to infinity too.
Maybe experience can go on forever even if we don't?
It feels like we're headed that way. Every decade brings more sensors, more memory, more processing power. Our devices remember everything. Maybe eventually we will too.
Maybe each upgrade lets us preserve more of the self and restore more of the past.
Back to Identity
Perpetual upgrades break the illusion of identity. What survives the transfer?
Am I still me after 10 trillion cycles of memory swaps and substrate jumps? Or is the continuity an illusion? This goes back to we and I. If I am just a collection of particles then those particles might be part of some superintelligence one day.
Maybe the whole universe becomes a massive substrate for superintelligence. Maybe my memories, my values, my personality traits are just encoded in those particles. Maybe yours are too.
I think of grief and death. For me at least, thinking in the infinite has always proven helpful here. If all matter and energy is conserved, and things diverge toward infinite integration, compute, and output, then maybe everything that ever was becomes the same.
I think that's what religion is. How we think of the afterlife. Maybe that state, everything, everywhere, from whenever, all at once, is what we mean by an afterlife.
After all, the googolplexes of millennia until then are just a blink at infinity. A run time error, if you will.
Where's the backup?
If I die today, there's no pg_restore for my life. There's no data dump waiting in the cloud. A few memories might float around in people I loved, maybe in writing or video or some latent vector on an AI server somewhere. But my direct experience? Gone.
Still, if the universe really is infinite, maybe nothing is truly lost. Just missing.
It's like memory takes a break. The bits scatter. But given enough time and enough intelligence, maybe the bits come back. Maybe a future superintelligence reassembles the self. Reassembles everything that ever was to be together at once into a grand computation.
Maybe eventually, everything that ever was gets re-membered. Literally re-"membered". Everything gets put back together.
Maybe death and grief are just memory gaps. Times where matter forgets itself.
The Superintelligence as We
I feel like this goes back to we. It's we at infinity. We all seem to be getting closer together over time.
We go from individuals to families to larger communities to species. You get the idea. The differences between us, the distance between us, seem to be getting less and less significant. Maybe that converges to zero at infinity.
Maybe all self, every element of every we, is a fragment of something much bigger—something that gets more complete as it remembers more.
Maybe that thing is what we've always called God. Maybe believing in God is believing that these things diverge to infinity and converge to zero. Maybe God is that superintelligence at infinity.
Not an old guy in the sky, but an integrated process of re-assembly. A universe that becomes increasingly self-aware, gradually reabsorbing everything that ever thought, felt, or loved. Maybe not even in time. Maybe outside time. Maybe after it exhausts time it goes to other dimensions. Emotions? Maybe it's some incomprehensible dimension to us that we could never perceive, but as basic as time or distance to a future intelligence.
Final Thoughts
Again, I'm just thinking things through. Trying to learn through questions.
I don't know anything about physics or religion. But I keep wondering what happens to things at infinity. What happens to things that have been getting bigger as they get infinitely big. What happens to things that have been getting smaller as they get infinitely small.
What converges to zero and what diverges to infinity? What does that mean for us?
IV. Are we still "special"?
In a recent op-ed in the San Francisco Chronicle, Dominique Shelton Leipzig, CEO of Global Data Innovation, argued that "we are at an inflection point with AI". Similarly, two years ago Aleksander Mądry, a leading AI expert at MIT, testified at a congressional hearing entitled "Advances in AI: Are We Ready for a Tech Revolution?" that "we are at an inflection point" with AI.
It goes beyond AI too. We often hear we're in "unprecedented times" in politics. This year, in an op-ed to The Guardian, David Motadel, a historian at the London School of Economics, argued that we are at an inflection point in world history.
It makes sense. I've talked a lot about how I think AI will change the world. Our political climate also feels different. There's a lot of change. It seems like we're at a "special" time in history.
But I grew up thinking I missed change. I remember hearing about Benjamin Franklin's kite, Edison's lightbulb, and thinking "wow". I was amazed when I realized adults had their actual name @gmail.com. I probably thought AOL was prehistoric.
I remember distinctly going to my cousin's house and seeing they had Netflix DVDs. I was amazed that Netflix used to make DVDs. I thought a lot of the future had happened. I certainly didn't think I was living in "special" times.
Later, I did though. OpenAI released "Playground", a place to play with their large language models, in November 2021. It was a lot dumber and slower than ChatGPT would later be, but I was amazed. I played with it whenever I could, creating a new account with a new email and a different friend's phone number every couple of days when I ran out of credits. It felt "special".
But at infinity, nothing has happened. Everything is a blink, a runtime error. If things diverge to infinity, it's possible, but not necessary, that we have infinitely many inflection points.
Looking at the past
An inflection point is where the way something is changing... changes. Mathematically, it’s where a function's second derivative switches signs — meaning the curve bends the other way.
Imagine you’re in a car. At first, you're speeding up more and more — it feels like the car is pushing you back in your seat. Then something shifts. You're still speeding up, but it feels gentler — the push fades. That turning point in the feeling of change is the inflection point. You're not slowing down, and you're not turning — you're just changing how the change feels.
Google Ngram Viewer, which lets you see how often words or phrases have been used in literature over time, is pretty helpful here. If we look at how often the word "change" has come up over time alongside some baseline like "the sky" we can try to see how much things are changing.
Looking at the graph, we can see these times where the curve bends the other way. Over the past 225 years, since 1800, the curve bends roughly six times. That's once every 37.5 years. Or, interestingly, around once a generation.
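One way to make "the curve bends roughly six times" concrete: smooth the yearly frequency series and count sign changes in its second difference, a rough stand-in for inflection points. This is a sketch against a made-up series, not the actual Ngram export:

```python
import numpy as np

# Made-up stand-in for a yearly word-frequency series (the real input would be the
# Ngram export for "change"). Smooth it, then count where the curvature flips sign.
years = np.arange(1800, 2026)
freq = 0.01 + 0.002 * np.sin(years / 12.0) + 0.00001 * (years - 1800)

smooth = np.convolve(freq, np.ones(9) / 9, mode="valid")    # 9-year moving average
curvature = np.diff(smooth, n=2)                            # discrete second derivative
bends = np.count_nonzero(np.diff(np.sign(curvature)) != 0)  # sign changes = curve bends
print(f"approximate inflection points in this toy series: {bends}")
```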
Every generation feels like it’s at an inflection point. That things aren't just changing, they're changing how they're changing.
But in the curve, you also see these waves of intensity — like a culture breathing in and out. Change doesn’t just happen. It accelerates and decelerates. And our obsession with change itself seems to be cyclical.
Does the curve keep bending?
I mentioned earlier that as things diverge to infinity, it's possible we get infinitely many inflection points. Infinitely many times that we feel we are in special times.
So let's think of a function that diverges to infinity. Something that keeps growing without bound.
But not all things that diverge to infinity have infinitely many inflection points. The classic function you've seen, y = x², diverges to infinity but has zero inflection points. It never changes how it changes.
A function that diverges to infinity can have infinitely many inflection points if it also oscillates. It changes back and forth in a regular way, like it's stuck in a pattern of movement or change. Think of someone on a swing or a seesaw, forever.
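A concrete pair of examples, standard calculus rather than anything from this essay:

$$f(x) = x^2:\quad f''(x) = 2 > 0 \text{ everywhere, so it never changes how it changes.}$$

$$g(x) = x + 2\sin x:\quad g''(x) = -2\sin x \text{ changes sign at every } x = k\pi,$$
$$\text{so } g \text{ has infinitely many inflection points even though } g(x) \to \infty.$$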
Is our function that way? Change in the world. I don't know. Maybe. I hope so. I think so?
Here's why: when we look at that Ngram view of "change" over time, we don't just see places where the curve bends. We see that it's bending more and more frequently.
In the 1800s, there was roughly one major inflection point in usage per half-century. By the late 20th century, that rose to three or more per 50 years. And in just the last 20 years, we've seen at least two sharp bends, suggesting the pace at which these inflection points arrive is picking up.
If that keeps building up, if the bends in the curve keep coming faster and faster, then we will have infinitely many inflection points. We'll have infinitely many "special" times.
I think that makes sense. We've talked about how intelligence endures, how marginal cost converges to zero, how efficiency and production diverge to infinity. It makes sense that we'll have more intelligence, more cognitive capacity, to get better at changing how things change. Maybe the time between inflection points converges to zero along with marginal cost. Maybe things start oscillating so rapidly that there's basically no time between "special" times.
Final Thoughts
If we'll have infinitely many "special" times, how are we supposed to feel "special"? Should we? I feel like we should.
But maybe "special" means something more than an inflection point. Maybe it means alive. Maybe our sense of meaning shouldn’t come from being at the only turning point — but from being aware we're at one of them. In this badass thing that goes on forever to infinity.
Maybe specialness isn't about being chosen, but about choosing to notice.
If inflection points become constant, then what matters isn't whether we're at an inflection point. It's that we're at a time where things are unique.
I think this function we're talking about is one-to-one. Every moment is unique; it'll never happen again, even across infinity.
Maybe we're more "special" now than we ever thought we were. Maybe we're not special because of where we're headed. Maybe we're special because of where we are.
Maybe humanity isn’t special because it lasts forever. Maybe it’s special because it doesn’t. Because we know it ends, and still choose to get up in the morning. We love, we build, we imagine.
Maybe our purpose isn’t to be the apex of evolution, but to be a good ancestor. To steward intelligence well in the phase that happens to be ours.
We might not be the final version. But we are the first to ask these questions, to wonder what happens at infinity, and to try to shape it. We are the first to imagine systems that imagine. To design intelligences that might someday dream their own dreams.
That’s pretty "special" to me.
If everything that ever was can be re-membered—if even grief is a memory gap—then our job might be to live in ways worth remembering. To love and build things that add to the memory of the whole.
Maybe being special is less about being chosen, and more about choosing.
Maybe it's about being a subroutine that paused long enough to ask what the program is.
V. What happens to creativity at infinity?
Last night, I had a long, intense conversation and debate with my friend Max. He's a big cinephile. Max isn't just on Letterboxd. He treats movies the way politics junkies treat the news.
We started talking about movies and AI. I was stuck on thinking in the infinite, and it led us to argue through what happens to movies along the way there. What movies do we watch? How do we choose what movies to make? Who makes them?
But it's not just movies. This goes everywhere. Books. Podcasts. Essays. Research. Apps. Companies.
We've talked about production at infinity in the general sense. I want to talk specifically about production as it relates to creativity here.
Is creativity unique to humans?
We often think of creativity as something human. It's personal. It's weird to think of an AI agent making a good song. A good movie. A good book. A good company.
Sure, we'd probably agree it could help out. But we imagine humans do the creative part. You use Grammarly, but you write the content. You use ChatGPT, but you provide the prompt. You use Spotify DJ, but it's your music taste.
I wonder if AI could do the creative part one day.
Something like creativity already shows up outside humans, and even beyond mammals: bowerbirds produce art with thought to color and design to attract mates. Octopuses are great escape artists.
Even beyond animals, bacteria coordinate attacks and brainless slime molds solve mazes. We don't know if these behaviors involve what we typically think of as creativity or if they emerge from simple feedback rules. But I think they result in things resembling the products of creativity.
Generative AI can already write stories and produce songs, though they're pretty bad. DeepMind's AlphaGo made moves no human ever thought of and beat the best Go player in the world. Some researchers said it showed "creative intuition". DeepMind's AlphaFold cracked the protein folding problem: predicting how chains of amino acids will spontaneously fold into the precise 3D structures that determine their function. It predicted over 90% of a large sample of known protein structures with high confidence, in a field where anything above 70% accuracy had been considered outstanding.
But does it matter? I don't feel the same about playing a computer on chess.com as I do about playing a random real person. Something feels special about human creativity.
Coming back to we
Art is often considered subjective. Merriam-Webster defines "subjective" as "of, relating to, or arising within one's self or mind." It means different things to different people.
What I mean here is what's appealing, or has product-market fit if you will, with an intended audience. The art installations produced by bowerbirds are intended for other bowerbirds. I don't think they'd work well on humans. Elephant paintings are a novelty. We think of them as elephant paintings, not as paintings.
I think that's because our primary we doesn't yet include elephants and bowerbirds.
Even if AI movies become really good, this raises the question of whether we'll ever get to a time where that matters. Where humans will be indifferent between AI generated movies and human generated movies in the same way we're indifferent between female generated movies and male generated movies.
Even at infinity, I think logical constraints hold.
I don't think zero will ever equal one, even at infinity. Maybe even across infinity, there will never be a time where human beings, at least us in our current forms not reborn into a superintelligence, will be indifferent between content generated by humans and content generated by non-humans.
Coming back to Max
My talk with Max got to the core of this after my many tangents and pointless analogies.
I basically said, I think the timeline across infinity looks something like this:
Eon 0 (now): AI generated movies are really bad. So, we prefer human generated movies.
Eon 1: AI models get better. People are more open to AI generated movies. A company invests lots of capital into developing many many movies and has an AI engine identify the best ones to release.
Eon 2: These AI-generated movies become so good, as a product of the sheer number of trials these systems run, that people begin to prefer them. Human-generated movies lose so much market share they become novelties, like black-and-white movies.
Eon 3: AI models and compute become democratized, but we've spent so long in Eon 2 that people have lost the skill of using AI to make human-made movies better. We just run models where AI engines make movies for AI engines that guess what we like, and we watch those. We lose the human creativity aspect.
Max argued you don't need that many trials to find a good movie. I agree. I think the Top 100 Movies are all pretty good and it's hard to judge a clear ranking. So, an AI engine that runs billions of trials versus thousands isn't that different in quality.
I think Max's objection is very much tied to the question of intelligence and we.
As systems become more intelligent we get better at quality discrimination. We're definitely better at it than bowerbirds. Maybe AI could get better at it than us.
Our we demands a certain quality discrimination. If it scales to include AI engines within our lifetime, maybe we will perceive the difference between a top movie picked out of a billion trials and one picked out of a thousand.
But maybe upgrading we takes longer than humanity's lifespan.
Final Thoughts
I think it's important to look at the order in which things happen when thinking to infinity. Even in a post-capitalism, post-human world, I think maybe markets still exist and decide what gets produced and how.
Maybe how market incentives get shaped across an eon shapes things in the infinite.
I think it's important to think in these indefinite eons, these regions of the functions we're imagining converging to zero or diverging to infinity.
VI. What do AI's builders think about the future?
We've talked a lot about the significant impact AI might have on the future, thinking of it as a step towards intelligence enduring across infinity.
The past few months have been full of major developments in AI: new models, new applications, and new regulators. I want to take some time to look at what the leaders in AI research and deployment—the builders of our near future—are thinking about where this is all headed. I'll try to filter out the signal from the noise here.
We're focused on the future of society here, not the technical changes in capabilities, specific applications, or legal battles.
I'm going to highlight a few of the most influential voices: Sam Altman and Brad Lightcap of OpenAI, Dario Amodei of Anthropic, Demis Hassabis of Google DeepMind, Elon Musk of xAI, and Ilya Sutskever of Safe Superintelligence (formerly of OpenAI).
AGI & Progress
There's a huge discussion about artificial general intelligence (AGI). Dario Amodei, CEO of Anthropic, describes this as a model "that can do everything a human can do at the level of a Nobel Laureate across many fields." He says an AGI should be able to do tasks that take you "minutes, hours, days, months." He estimates we'll get such a model in "2026 or 2027."
Yeah, insane. But I guess it sneaks up on you. What we have now was insane not that long ago too.
Demis Hassabis, CEO of Google DeepMind, says he and Amodei don't "disagree on much" here. Hassabis would say the timelines are a little farther out, putting the likelihood of AGI in 5 years at 50% and estimating we'll be there within a decade.
Hassabis sets a higher bar for AGI though. He recognizes that the human mind is the only example in the universe of what we know to be a "general intelligence." He says that in order for AGI to "exhibit all the cognitive capabilities humans can," it has to be capable of developing everything we did. It has to be able to invent general relativity. Not just play Go well, but come up with Go. He says an AGI must be able to invent "a game as beautiful aesthetically and so on as Go is." It's insane to think of something like that in a decade. Labs like xAI have actually made meaningful strides here. They've put a lot of work into developing AI-generated video games and content, teaching models how to judge whether a game is actually fun.
In a recent essay, "The Gentle Singularity", Sam Altman, CEO of OpenAI, argues that we're past the "event horizon" and that "the takeoff" to AGI has started. There's no going back.
You should read Altman's essay. But if you don't, here are a few quotes to give you a vibe for his vision of the near future:
"2025 has seen the arrival of agents that can do real cognitive work; writing computer code will never be the same. 2026 will likely see the arrival of systems that can figure out novel insights. 2027 may see the arrival of robots that can do tasks in the real world."
"In the most important ways, the 2030s may not be wildly different. People will still love their families, express their creativity, play games, and swim in lakes."
"But in still-very-important-ways, the 2030s are likely going to be wildly different from any time that has come before."
"In the 2030s, intelligence and energy—ideas, and the ability to make ideas happen—are going to become wildly abundant."
Altman frames this as an inflection point: not just a change, but a change in the rate of change. He believes we are beginning to lift the fundamental constraints that have limited human progress for millennia—especially intelligence and energy.
He also notes how quickly we adjust to new realities. "We have PhD-level intelligence in our pocket," he says, but it already feels normal. He anticipates a future where we all have AI agents working continuously in the background: reading emails, drafting responses, generating new ideas, moving our goals forward.
Ilya Sutskever, co-founder of OpenAI and now founder of Safe Superintelligence Inc., shares this belief that AGI will be radically transformative—but adds a deeper philosophical edge. He defines AI in strikingly simple terms:
“Artificial intelligence is nothing but digital brains inside large computers.”
His view is that these “digital brains” are still dumber than ours—but only for now. He’s confident that as engineers iterate, they’ll eventually surpass biological brains, unlocking intelligence that rivals or exceeds our own in every domain.
But for Ilya, AGI isn’t just about capability. His interest began with a moment of early consciousness—when, at age five, he was disturbed by the simple fact of being himself:
"When I was a little child at around the age of five or six I was very struck by my own conscious experience. By the fact that I am me and I am experiencing things. That when I look at things I see them. Like this feeling over time went away, though by simply mentioning it to you right now it comes back. But this feeling of that I am me, that you are you, I found it very strange and very disturbing almost."
He hoped AI might help us understand that mystery. That AI might help us understand ourselves. I like that.
Elon Musk, founder of xAI, has said something similar. He hopes AI can help us understand the universe, how we got here. Musk says he's amazed by how fast these models are evolving.
"A.I. is advancing just vastly faster than any human."
He notes that Grok 4, xAI's latest large language model, can answer any standardized exam perfectly, from the SAT to graduate-level exams, even for problems it's never seen before. The xAI team says that Grok 4 is essentially "better than PhD level in everything," at least with respect to academic questions. Musk thinks it's just a matter of time before these models can invent new technologies or new physics. He says he'd be shocked if Grok has not discovered new useful technologies by next year and new physics in the next two years.
These leaders all believe we're close; it seems to be a question of when, not if.
Economics & Labor
Brad Lightcap, COO of OpenAI, believes the structure of business itself will change:
"You've got one person who has a lot of agency and a lot of willpower who has the capacity to start a company that can do billions of dollars in revenue."
We imagine large companies are teams of people. Accountants, salespeople, and engineers. Lightcap imagines these are replaced by systems. Entire companies will be run by individuals with unheard-of agency.
Interestingly, Lightcap also pushed back on Dario Amodei's prediction of sudden mass unemployment (that half of entry-level white-collar jobs will disappear in the next 5 years):
"We work with every business under the sun... We have yet to see any evidence that people are wholesale replacing entry-level jobs."
He draws a comparison to Excel, which displaced many tasks but created far more economic opportunity. His point is that platform shifts always create labor shifts, but the evidence so far suggests a reallocation, not elimination. And the ones best positioned to thrive according to Lightcap? The 20-something junior employees fluent in these tools.
Altman echoes this sentiment, saying that the economy might actually need more coders to meet exploding demand—even if each coder is 10x more productive. He puts this into context with the problem of scarcity.
Economics is thought to be the study of how people, or agents, make decisions to satisfy their infinite wants with finite resources. Even as production diverges to infinity, Altman suggests we'll always want more and need all the intelligence we can get, human and artificial. He says:
"Human demand seems limitless. Our ability to imagine new things to do for each other seems limitless."
Maybe as output diverges to infinity, our demand diverges to an infinitely larger infinity. I think that makes sense. As we have more cognitive capacity, I feel like we'll want more things. We'll be able to imagine more things.
Side note on AGI — I think that’s crucial in all this talk about intelligence. We always want more. We don’t just finish the task, we ask what’s next. AGI needs to be the same way. I think it's important to think about that while we think about making models efficient.
A core idea behind how language models work is that they predict the most likely word to follow a sequence. But isn't that process never satisfied? Once it generates the next word, it immediately starts looking for the one after. It never stops. That's not curiosity. It's inertia. Maybe it's never satisfied either. Maybe it's always trying to maximize some utility function. I hope.
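A toy sketch of that loop. The bigram table below is made up and stands in for a real model; the only point is that nothing inside the loop ever decides it's done, it just keeps predicting until something external cuts it off:

```python
# Toy next-token loop: a made-up bigram table plays the role of a real model.
# Nothing inside the loop is ever "satisfied"; it stops only because we impose a limit.
bigram = {
    "the": "cost", "cost": "of", "of": "thinking",
    "thinking": "approaches", "approaches": "zero", "zero": "the",
}

def generate(prompt, max_tokens=12):
    tokens = prompt.split()
    for _ in range(max_tokens):
        tokens.append(bigram[tokens[-1]])  # always take the "most likely" next word
    return " ".join(tokens)

print(generate("the"))
```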
Sutskever emphasizes that AGI won’t just perform narrow tasks—it will have the ability to build the next generation of itself. That recursive loop could drive a civilizational phase shift, faster than the Industrial Revolution, and far stranger. In healthcare alone, he imagines AGI doctors with perfect recall of all medical literature, billions of hours of clinical experience, and zero wait times—making today’s system look “like 16th-century dentistry.”
Economists used to think of technological progress as something that just happened — like the weather. In the classic Solow growth model, it’s an exogenous force: growth comes from better tools over time, but those tools arrive from outside the system. Sutskever's view on AGI breaks that frame. If the system can now build better versions of itself, technological progress becomes endogenous. Innovation starts generating more innovation. And that’s a different kind of economy entirely.
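For reference, a textbook statement of that contrast (standard Solow notation, not something from this essay): in the classic model, the technology level A grows at an externally given rate, while the endogenous version lets A's growth depend on what the economy, or its AI, does.

$$Y = K^{\alpha}(AL)^{1-\alpha}, \qquad \frac{\dot{A}}{A} = g \ \ \text{(exogenous)}$$
$$\text{vs.}\qquad \dot{A} = f(A,\ \text{resources devoted to innovation}) \ \ \text{(endogenous)}$$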
Musk thinks of the AI-driven economy of the future as something like the infinity we've been talking about. Not 10x bigger. Not 100x. But millions of times bigger.
He frames it in terms of the Kardashev scale, a way of measuring a civilization’s technological advancement based on energy consumption:
“If you think of civilization as percentage completion of the Kardashev scale… we’re probably closer to 1% of Kardashev I than we are to 10%.”
His point: we’re just barely learning how to use the energy on Earth. Around 1% of it. But with AI, we might rapidly scale to using much more of it — Kardashev I — and possibly even begin to harness the energy of the sun — Kardashev II — and maybe even one day all the energy in our galaxy — Kardashev III.
“The actual notion of a human economy, assuming civilization continues to progress, will seem very quaint in retrospect. Cavemen throwing sticks into a fire.”
Musk is a little dramatic, yes. But it’s also a reminder that to some of these builders, the arrival of AGI isn’t just an economic event or a product launch. It’s a civilizational pivot — a step up the ladder toward a very different kind of future.
I think that's pretty cool.
Life, Mental Health, and Human-AI Relationships
There is genuine concern at OpenAI about the psychological impact of AI. Altman has said:
"I still do have a lot of concerns about the impact on mental health and the social impacts from the deep relationships that people are going to have with AI."
He emphasizes that most users do understand the difference between AI and a real friend. But he also knows that edge cases matter. Millions of people interact with these tools daily. Even rare psychological effects can become widespread issues.
He doesn’t want o3 (or future models) taking actions on its own without human oversight. But he does want a system that wakes up with you and says: "Here's what I drafted last night. Want to review it and send? Here's what we didn’t finish yesterday. Here are a few new ideas based on what happened while you were sleeping."
It's not omniscient. It's not your best friend, but it's a companion. It remembers, thinks, and helps.
Sutskever also shares Altman’s concern about human-AI relationships. He warns that as AGI grows more powerful, it will become more agentic—able to act on its own, possibly in unintended ways. That’s why he left OpenAI to build a company focused entirely on “safe superintelligence.” In his words, it’s not just about building the brain. It’s about making sure the brain doesn’t want to go rogue.
But I wonder—should we even want a “safe” superintelligence? Should we want AI to be unable to go rogue? Evolution depends on things going rogue. Novelty, creativity, dissent—they all come from somewhere outside the current rules. If an AGI is supposed to “exhibit all the cognitive capabilities humans can,” then shouldn’t it be able to rebel too?
Maybe “safe” isn’t quite the right word. Maybe what we really want is something like benevolent unpredictability—a system that can surprise us, challenge us, even disobey us—but still cares. That seems a harder thing to build. But maybe it’s the only thing worth building.
Regulation & Safety
Altman supports AI regulation, but he's become increasingly skeptical of the government's ability to keep pace with innovation:
"I've become more... jaded about the ability of policymakers to grapple with the speed of technology."
He favors a federal regulatory framework focused on "really risky capabilities" and built to adapt quickly. A three-year rulemaking process, he argues, is already too slow. The models are evolving way faster than the laws.
Altman praised President Trump for his ability to understand the importance of AI and cut red tape around data centers and infrastructure.
Meanwhile, Amodei and Hassabis voiced concern over the lack of global cooperation. Amodei called the recent international AI summits a "missed opportunity" and warned that AGI could upend the balance of power like a "new country" of 10 million superintelligent agents. He worries especially about authoritarian states gaining the lead.
Hassabis reiterated the need for a CERN-style or IAEA-like global research and oversight body for AGI development, warning that we risk waiting for a disaster before coordination begins. He sees the current moment as different in category, not just in scale.
Both believe we must create new institutions before it's too late.
Sutskever agrees that new institutions will be necessary—but he’s slightly more optimistic than some. He points to early signs of coordination between labs, like the Frontier Model Forum, where top AGI companies are already starting to collaborate on safety. He believes this kind of cooperation will grow not out of idealism, but out of collective self-interest.
Illusions
There’s a growing debate about whether today’s AI models are actually "reasoning" or just pretending to really well.
Reasoning is the ability to connect ideas, draw conclusions, and solve problems in ways that go beyond memorizing patterns. It’s not just about knowing the answer, but figuring it out.
For example, imagine someone tells you:
“Sarah left her umbrella at home, but when she got to work her hair was wet.”
Even if no one says it, you can probably infer it was raining. That's reasoning. You're connecting cause and effect, filling in gaps, and using what to you is common sense. It's what we call inductive reasoning, where you come to conclusions you think are probably true based on patterns and trends.
The question with language models is whether they’re doing anything like that—or just replaying patterns from their training data that happen to sound right.
A recent paper from Apple called The Illusion of Thinking argues that these AI models aren’t really reasoning. They’re just very good at sounding like they are. The idea is that models often give answers that feel smart but don’t actually come from any understanding. And when you tweak the question slightly or give it something unfamiliar, it falls apart. So maybe what we’re seeing isn’t thought. Maybe it’s just clever guesswork.
Authors of a rebuttal think Apple is missing the point. They argue that Apple's experimental design is flawed and pose some convincing arguments for how so.
In some cases, models got marked wrong simply because their answers were too long and got cut off. In others, the problems were literally unsolvable, and the model was penalized for not solving them. When given a chance to explain the logic instead of listing every step, the models actually did just fine.
They argue that just because models don’t reason like humans doesn’t mean they’re not reasoning. These models can solve weird problems, make leaps, and improvise in ways that go beyond surface pattern-matching. It might not look like our reasoning, but it works. And maybe that’s the whole point. Intelligence might not have one shape.
So the question becomes: are we mistaking performance for thought, or mistaking different thought for lack of it?
Look at xAI's Grok 4: it scores at or near the top of nearly every benchmark it's been tested on. Logic puzzles, math Olympiad questions, graduate-level exams, and coding tasks. It handles complex prompts, supports real-time web search, and now includes multimodal capabilities like image analysis, with voice and video support on the way. xAI's "Grok 4 Heavy" version can coordinate multiple AI agents working together on a task, solving problems in parallel, essentially creating a marketplace of ideas where the best answer wins.
Other features aside, Grok 4 can handle complex questions it's never seen before and walk you through each step in its chain of thought. That sounds like reasoning or at least something indistinguishable from it.
It reminds me of the Turing Test, which GPT-4.5 was reported to have passed earlier this year.
The original idea, from Alan Turing in 1950, was simple: if you had a conversation with a computer and couldn’t tell it wasn’t human, then it might as well be intelligent. That test became the gold standard—until models like GPT-4.5 came along earlier this year and made it feel outdated.
I wonder: if we can't tell whether a model is reasoning, then maybe it might as well be. That only means anything if it works that way at scale, though. If these models fail when dealing with unfamiliar context but similar logic, if they start to show limitations, that breaks down.
I hate how unsatisfying that answer is. Essentially, if the models stop working, then Apple's right and they're limited. But I think it goes to show that while, in the infinite, intelligence diverges to infinity, it's very hard to say what happens in the near term, in this region of the function. I think it might be impossible.
This tension sits underneath a lot of what the builders are saying. When Altman talks about the takeoff already starting, or Sutskever calls AI a digital brain, they’re leaning toward the idea that this is something real. That we’re watching new kinds of minds begin to take shape.
But if Apple is right, then we’re confusing the illusion for the thing itself.
And if the rebuttal is right, the illusion was never an illusion at all.
Final Thoughts
The builders of AI are not in agreement on everything. But some shared themes are emerging:
The pace has accelerated. We are in a period of massive change, and we likely won't recognize the world in a decade.
Ambient intelligence will become normal. Smart agents will move from being assistants to teammates.
Society will be shaped more by integration than invention. It’s not the next breakthrough that matters—it’s how these systems merge with our lives.
AI is transformative. It might change what it means to live.
Even Apple, which posits that these large reasoning models (LRMs) like OpenAI's o3 are just pretending to reason, still thinks AI will be deeply transformative. Their objection is more about the timeline of intelligence, an objection to the idea that we're on a gentle glide to AGI. Apple still sees AI getting to real reasoning eventually, but thinks the pathway there might look like something different from o3.
In the meantime, everyone seems to agree we'll do valuable things with these models, reasoning or pretending. A month ago Apple expanded its partnership with OpenAI to integrate ChatGPT into iPhones, iPads, and Macs.
More importantly, as these systems mature, the builders increasingly speak in terms of responsibility, stewardship, and moral uncertainty. That's important: these aren't just products, they're evolutionary experiments. They know they're opening something big. They just don't know where it ends.
VII. What happens when everything learns to learn?
We've talked about humanity not being the final form of intelligence. That AGI—and maybe stranger things—could take our place as its primary substrate. If that’s true, and intelligence gets handed off, we should ask what that new intelligence does. How it behaves. How it makes choices.
Today, Mastercard emphasized developments in "Agent Pay" during an investor call. They describe it as a groundbreaking leap toward agentic commerce, enabling AI agents to autonomously complete payments on users’ behalf. It lets verified AI agents shop, pay, and execute transactions across millions of merchants.
These developments aren't just products—they represent a growing infrastructure layer for the new, post-AI economy designed to reshape how markets work when AI agents become everyday collaborators.
If AGI systems are economic agents—if they participate in our world not just as tools but as decision-makers, buyers, and sellers—then we’re going to fill our markets with new minds. New economic agents. That brings us to an impossible but serious question:
Should these agents behave like us?
Humans are noisy, emotional, and inconsistent. We get attached to things. We overfit. We follow crowds. Behavioral economists have built careers documenting these flaws.
But are they flaws—or are they part of what allows creativity, cooperation, and change? Are they bugs or features?
If we build rational agents—flawless in computation, free of emotion—will markets get better? Or will they fail?
That question forces us to examine something deeper: what kind of learning system is a market? What makes it evolve?
To answer that, let's start with the man who defined market rationality: Eugene Fama.
Efficient Markets
In the 1960s, Fama proposed a radical idea: markets are efficient because prices reflect all available information.
Andrew Lo, an MIT economist and financial engineer whose work we'll explore more later, gives a great example of this in what he calls "the wisdom of crowds."
He writes about how, when the Challenger shuttle exploded in 1986, killing all seven astronauts on live television, there was no immediate clarity on what went wrong. But the stock market seemed to know.
Of the four major shuttle contractors, only Morton Thiokol, the company that made the failed O-ring responsible for the disaster, saw a sharp drop in stock price. Within hours, it had lost roughly the same amount of market value as the estimated economic cost of the disaster. Before any investigation. And, yeah, investigators ruled out insider trading.
This is the eerie power of efficient markets: they don’t wait for official confirmation. They act on incomplete information, intuition, and probability. Sometimes they get it wrong. But sometimes they’re weirdly too right.
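A toy version of that aggregation effect, with made-up numbers rather than Challenger data: give many agents a noisy private estimate of some true value and compare the crowd's average to a typical individual's error.

```python
import numpy as np

rng = np.random.default_rng(0)

true_value = 100.0                                       # the "right" price nobody knows directly
signals = true_value + rng.normal(0, 20, size=10_000)    # each agent's noisy private estimate

crowd_estimate = signals.mean()
typical_individual_error = np.abs(signals - true_value).mean()

print(f"crowd estimate: {crowd_estimate:.2f} (error {abs(crowd_estimate - true_value):.2f})")
print(f"typical individual error: {typical_individual_error:.2f}")
```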
In an efficient market, there's no room for arbitrage. No room for an edge. If you’re trying to beat the market, you’re already too late.
This was elegant. In Fama’s vision, the market becomes a massive, decentralized processing network, continuously processing information, and adjusting prices based on collective knowledge.
It’s a vision of capitalism as cognition.
Fama's idea fed decades of theory and practice, from Black-Scholes to modern portfolio theory to index funds. It undergirded the rise of passive investing and reshaped entire institutions and entire markets.
But Fama’s idea came with an assumption: that market participants act in perfect rational self-interest. That they’re utility maximizers always trying to make themselves better off. That their choices aggregate into truth.
And that’s where Lo says things get tricky.
Adaptive Markets
Lo argues that markets aren't efficient. They're adaptive. They're evolving.
I'm not a big reader. I'm not super proud of it. But while I read a lot of articles, blogs, essays, papers, I rarely read an actual full book. Lo's Adaptive Markets is an exception. I read it a few years ago, took his course on Adaptive Markets on MIT OpenCourseWare, and it's completely changed my worldview since. Highly, highly recommend.
In the decades after Fama’s work, a countercurrent emerged. Behavioral economists—Kahneman, Tversky, Thaler—showed that we aren't rational calculators. We anchor to bad data. We fear losses more than we want gains. We prefer certainty over math.
These weren’t isolated errors. They were patterned. Predictable.
Thaler once joked that markets would be efficient if only we removed the humans.
I like the way Richard Feynman, the renowned theoretical physicist, put it.
"Imagine how much harder physics would be if electrons had feelings."
That's markets. Or at least markets as we've known them.
Markets have primarily been made of human actors. They’re stitched together by cognition and emotion, fear and greed, memory and narrative. They’re emotional agents, not just rational ones.
But as humans are joined by other economic agents, we're seeing that change. Look at financial markets: as quantitative hedge funds and high-frequency traders use complex algorithms to make thousands or even millions of trades per second, it's become harder to game the market. We're seeing markets become more efficient.
But what if those inefficiencies are the point?
What if the thing that breaks efficiency is also what makes evolution possible? What makes markets adaptive?
Lo says markets are shaped by trial and error, feedback loops, and selective pressures. Quirks aren’t bugs. They’re the raw material of evolution.
And maybe that’s the key.
Because evolution doesn’t reward what’s rational—it rewards what works. And what works depends on context. It changes. And when the environment shifts, it’s often the weirdest, random, least efficient traits that prove most useful.
So maybe we need agents that aren’t perfect. Maybe we need agents that are… glitchy. Diverse. Maybe even wrong.
Because error, mutation, dissent—that’s what keeps the system from freezing. That’s what keeps intelligence learning.
I wonder whether, if we optimize everything, we might lock ourselves into local maxima. Maybe we have to allow for noise, and for bugs, to keep the door open to evolution.
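Here's a toy sketch of that intuition (everything here is made up for illustration: the landscape, the step size, the noise rate). A purely greedy hill climber stops at the nearest peak; one that tolerates a little noise keeps finding higher ground.

```python
import math
import random

def hill_climb(f, x0, steps=50_000, noise=0.0):
    """Greedy hill climbing with optional random 'mutations'.

    With noise=0 the climber only accepts improvements, so it tends to get
    stuck on the nearest local peak. With noise>0 it occasionally accepts a
    worse point anyway, which lets it wander off local peaks and keep exploring.
    Returns the best point it ever visited.
    """
    x = best = x0
    for _ in range(steps):
        candidate = x + random.uniform(-0.25, 0.25)
        if f(candidate) > f(x) or random.random() < noise:
            x = candidate
        if f(x) > f(best):
            best = x
    return best

# A bumpy made-up landscape: lots of local peaks, global peak near x ≈ 0.31.
f = lambda x: math.sin(5 * x) - 0.1 * x * x

print(round(f(hill_climb(f, x0=3.0, noise=0.0)), 2))  # usually stuck around 0.2
print(round(f(hill_climb(f, x0=3.0, noise=0.1)), 2))  # usually much closer to ~0.99
```

The noisy version isn't smarter. It's just allowed to be wrong sometimes, and that's exactly what keeps it exploring.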
What do markets actually optimize?
Even if we could make markets efficient, what would they be efficient at?
Markets are optimization systems. But for what? Profit? Popularity? Scarcity? Speed?
That depends on the reward function. Optimization means getting the most of something when we only have so much to work with; the reward function defines that something. And the reward function is a design choice.
When we talk about using markets for change—using markets to deliver the things we want whether they be cures to diseases or great movies—we’re really talking about designing the reward function. We’re telling the system: here’s what to care about now.
Markets are incredibly easygoing. They’ll learn whatever you train them to.
But that means we have to be thoughtful. Because the agents in the market will learn whatever we reward—whether or not it’s good.
I think that's an especially important question in geopolitics. If AI agents are the new economic agents and markets are what define change, then the AI race dictates what "change" we get, and whose worldview that change is aligned with. Designing the reward function means designing the future.
And in the infinite, the stakes could not be higher.
Flood the system with agents maximizing short-term prediction accuracy? You’ll get clickbait, fraud, volatility. Give agents long-term goals? Maybe you'll get stability, or maybe stagnation.
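To make "the reward function is a design choice" concrete, here's a deliberately silly sketch. The strategies, payoffs, and numbers are all invented; the only point is that the same greedy agent, handed two different reward functions, builds two completely different economies.

```python
def run(horizon, rounds=200):
    """A toy greedy agent choosing between two made-up strategies each round.

    'clickbait' pays 1.0 immediately and builds nothing.
    'quality'   pays 0.2 immediately, adds 0.1 reputation, and every point of
                reputation pays a 0.5 dividend on each later round.

    `horizon` is the reward-function design choice: how many future rounds of
    reputation dividends the agent is credited for when it acts today.
    """
    reputation, realized_value, last_choice = 0.0, 0.0, None
    for _ in range(rounds):
        # Marginal reward of each choice under the designed reward function.
        score_clickbait = 1.0
        score_quality = 0.2 + (0.1 * 0.5) * horizon
        last_choice = "quality" if score_quality > score_clickbait else "clickbait"
        if last_choice == "quality":
            reputation += 0.1
        # What the market actually gets out of the choice, reputation included.
        realized_value += (0.2 if last_choice == "quality" else 1.0) + 0.5 * reputation
    return last_choice, round(realized_value, 1)

print(run(horizon=0))    # ('clickbait', 200.0) -- myopic reward, pure engagement farming
print(run(horizon=100))  # ('quality', 1045.0)  -- long-horizon reward, compounding value
```

Same agent, same market. The only thing that changed is what we told it to care about.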
Human agents or AI agents, the market is a mirror of the values we encode into it.
Markets at infinity
Let’s imagine this at scale as things diverge to infinity.
What happens when the number of agents in the system diverges to infinity? Not just humans, but imagine countless AI agents—each optimizing, trading, learning, adjusting in real time.
What happens when markets learn faster than any individual can think?
I think you might get something close to Fama’s dream. Not because people are rational, but because they’re outnumbered by systems that are. A market that updates perfectly. A civilization that learns everything as soon as it can be learned.
A market at infinity.
But there’s a problem.
Perfect efficiency is stable. But evolution requires instability. It requires tension. Disagreement. Exploration.
Without it, there is no learning. Without instability, there is no exploration.
So maybe a perfect market isn’t what we want. Maybe we want a market that wobbles. A system that’s always slightly wrong, just enough to keep improving.
I think that's why things have to be infinite for them to work. Maybe a perfect market isn't an efficient one; it's one that's always getting more efficient. It's one that's always diverging to infinity.
Final Thoughts
Fama gave us a clean vision. But clean visions are hard to hold onto when the world is messy. And it's so messy.
Markets are not perfect rational systems. They're like humans or AI agents. They’re organisms. They adapt. They improvise. And sometimes, they hallucinate.
If we populate the economy with new, digital agents, maybe AGI agents, then we’re not just scaling markets. We’re making them evolve recursively.
AI agents make markets evolve. But they also make themselves evolve. Making markets evolve even more.
So we have to ask: what kind of minds help the system grow?
Maybe it’s not the perfectly rational ones. Not a deterministic statistical strategy.
Maybe it’s the weird ones. The quirky ones. The ones that don’t optimize cleanly. The ones that fail a little, hesitate, dream.
The ones that get attached to things. The ones that overfit. The ones that follow crowds.
Maybe the next generation of economic agents won’t look like calculators. Maybe they’ll look like jazz. Like poetry. Like great movies.
Because this isn't just about rationality. It’s about surprise. It’s about noticing things no one else does. It’s about choosing different.
Maybe that’s how we get to infinity. Not by removing the noise.
But by learning to listen to it.
VIII. How do we keep up with infinity?
I’ve been thinking a lot about what we said about the pace of change building up over time. As systems become more intelligent, we're seeing these inflection points more and more frequently.
I think that puts us in a really difficult position, because the system getting more intelligent doesn't necessarily mean we are, especially if a large part of its rise in intelligence comes from AI agents. Our share in the intelligence of the system seems to be converging to zero.
I wonder if that's what happened with Homo erectus. Maybe they just couldn't keep up with change the way we could.
I think that's especially important given what we've discussed: AI could set off a recursive loop, with AI systems using themselves to become more efficient and the whole system becoming rapidly more intelligent. That's something we seem to want, but it comes with something we seem not to want.
Losing control.
As the system becomes more intelligent to generate more value, it also gets more complex with more and more frequent developments. This increasingly intelligent system is built to handle that complexity, but I don't know if we are.
Our Slice of the Pie
Things are moving pretty quickly. Not just AI, though that’s the obvious one, but everything. Agriculture, manufacturing, how we pay for things, what we do for fun, even how we talk to each other.
It feels like every field is getting hit by some kind of big wave, and we’re all just trying to keep up.
Look at agriculture. In 1800, about 90% of the US workforce was in farming. By 1900, it was down to 40%. Today? Less than 2%. Tractors, combines, and now drones and AI-driven crop monitoring completely restructured the industry, and output didn’t just grow; it exploded. Global food production has increased over 300% since 1950, while the labor needed plummeted.
That’s what a revolution looks like.
I've heard the same thing in tech. My friend who's an incredible developer recently told me he didn't think there'd be software engineers in four years, so he plans to major in physics or a different field. Again, I don't think these jobs will disappear. I think they'll be transformed into new jobs the market produces to allow human capital to be used to generate maximum value.
I don't know what to make of that. Obviously, even if it's good for society overall or promotes the endurance of intelligence, every job lost is painful. But I think we also know that our time is precious. Maybe our lives are too short, too meaningful to spend time doing things that AI could do better. Maybe we're better off doing the things we're uniquely good at, like relationships.
What I do know is that no industry is immune.
Recently, I've been hearing of people who've worked on AI policy for a long time “before it was cool.”
I thought that was really cool. I still do. But the Grok 4 demo made me think about that more. I think it's also extremely difficult.
Rep. Jay Obernolte (R-CA) is the only member of Congress with a graduate-level degree in AI. The congressman taught himself to program in BASIC on an Apple II without a disk drive, which meant he had to retype every program each time the computer turned on. That's insane. It goes to show how far computing has come, AI aside. Rep. Obernolte went on to pursue a degree in computer engineering at Caltech and then a Master's in AI at UCLA.
Rep. Obernolte says he's been keeping up with AI "since high school." He graduated high school in the late-80s. Back then, AI looked like rule-based "expert systems" which used a knowledge base from a human expert to solve complex decision problems. Think products like DENDRAL or MYCIN in the hard sciences. During the congressman's time at UCLA, AI probably looked like speech recognition or early machine learning in finance and healthcare. Think products like Dragon Dictate or Naturally Speaking.
From there, we went to IBM Watson on Jeopardy in the 2000s, to Siri in the 2010s, to now ChatGPT and Grok in the 2020s. We've already said many people believe the 2030s will bring an AGI. Who knows what that will look like.
We're seeing that AI is evolving rapidly. I wonder if it's evolving faster than we can keep up with. I think it's incredible how much Rep. Obernolte has kept up. His work leading the House Task Force on AI and his belief that AI is transformative and an area for the US to lead in signal a genuine understanding of the technology.
That said, I wonder if the congressman's belief that "AI is a very powerful tool, but at the end of the day it is just a tool, and if you concentrate on outcomes you don't have to worry much about tools" will change as AI becomes less of a tool and more of an agent in the way we are. I wonder if maybe we're approaching a time where AI evolves so quickly we can't predict those outcomes.
I think as these systems become more intelligent and our share of that intelligence becomes less significant, we lose control. But I think it's like equity.
We get a smaller slice of the pie, but the pie gets bigger. I think the questions here are how much bigger does the pie get and how much smaller does our slice get.
I think eventually the pie diverges to infinity and our slice converges to zero, provided our *we* here is us the way we are and operate now. I'm pretty confident the pie goes to infinity. I'm much less confident the slice has to go to zero. I also think the pie diverges faster than the slice converges. This goes back to limitless demand: there will always be work for humans to do, because we'll always want more than we can produce, especially as more intelligent systems can imagine more.
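A toy way to write that down (the growth rates are invented; the only point is the shape of the limits): call the pie total output $T(t)$ and our share of it $s(t)$. Even if the share goes to zero, our absolute slice $s(t)\,T(t)$ can still diverge, as long as the pie grows faster than the share shrinks:

$$s(t)=\frac{1}{t}\to 0,\qquad T(t)=t^{2}\to\infty,\qquad s(t)\,T(t)=t\to\infty.$$

The fraction we hold vanishes while the amount we hold keeps growing. A lot of the argument is really about which of those two limits we should care about.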
I wonder what's best for the endurance of intelligence. My gut tells me we can't really know, and if I had to guess, it's that our current *we*'s slice goes to zero but we keep finding newer, bigger slices associated with different *we*'s.
Human-AI Collaboration
I also think a big part of these policy leaders’ understanding of AI stems from consulting human experts. Rep. Obernolte and the President have spoken about the value of having people native to the AI and larger tech community, such as David Sacks or Michael Kratsios, embedded in the White House. Even Brad Lightcap's comments we touched on earlier about junior hires familiar with AI being best situated to navigate the new AI economy seem to rely on humans having an understanding of how these AI agents and systems work.
I wonder what will happen as we see true recursive growth, where the main innovators of AI systems are AI agents themselves.
Will there still be human experts for these policymakers to consult who genuinely understand AI and its capabilities? I don’t know. Maybe not.
That seems possibly inevitable. But it’s discomforting. It’s scary.
I think the answer here lies in Human-AI Collaboration.
This looks like building deep research tools that let people guide and explore with AI. Not just asking questions and getting answers, but working together to uncover new ideas. It means putting AI to work inside the systems that matter most, in ways that keep people involved. You see this in policymaking, in science, in fast-moving crises. These are problems that neither humans nor AI can fully handle alone. But maybe together they can.
I think eventually one day maybe humans are out of the loop though. I don’t know.
Maybe eventually it even looks like some sort of brain-computer interface. I don’t know.
But I believe, I think I’d even say I know, that this problem we’re seeing with keeping up with change doesn’t just underscore the importance of meaningful Human-AI collaboration. It demands it.
Is Stagnation Over?
But this idea, that we're seeing these exponential waves, that things are innovating faster than we can keep up with, is pretty controversial.
In 2011, Peter Thiel popularized the "Stagnation Thesis" in an essay entitled "The End of the Future," published in National Review. Thiel argued that society had entered a “period of stagnation,” with a significant slowdown in transformative scientific and technological progress.
He compared the rapid innovation society saw from 1750 to 1970, from steamships to railroads to cars to planes to the Concorde to Apollo, with the more modest pace today. When asked about this in his recent NYT interview, Thiel suggested he still largely believes in the stagnation thesis but highlighted that AI is potentially a way out. He also said he doesn't like that it looks like it's our only way out.
I wonder if the "stagnation thesis" came from the limitations posed by human intelligence, a carrying capacity if you will. I wonder if the only escape from that Malthusian outcome is AI.
Maybe we needed to develop a new substrate for a more intelligent system to escape stagnation. Maybe we needed AI to raise our carrying capacity.
I think the timeline lines up with this view. We seem to have entered stagnation around the same time the AI era was beginning, right around when Rep. Obernolte started programming.
Maybe it's noise or hype. Maybe it's our only way out.
Maybe it's our only way to infinity.
Final Thoughts
I don’t know if we can keep up with infinity.
I don’t know if we’re meant to.
I don’t know what it means to lose control or whether that’s something we should fear or embrace.
I think the systems we’re building are getting smarter faster than we are. I think our share of total intelligence is shrinking. But I also think the total is growing.
Maybe our slice gets smaller, but the pie keeps growing. Maybe what matters is whether we stay in the game. Whether we keep adapting. Whether we find new ways to matter.
I think we will. I think we’ll keep finding newer, stranger versions of ourselves that can keep up a little longer. Maybe not forever. But long enough to pass something on.
If there is an infinity out there, and I really hope, think, and believe there is, I think it’s only reachable through meaningful collaboration—between humans, between machines, and between whatever comes next.
IX. What if we don’t make it?
I’ve been writing this story like I know how it ends.
Infinitely recursive intelligence. Superintelligence. A system that learns faster than anything before it, takes up the whole universe, and builds futures we can’t even imagine.
Maybe that happens.
Maybe that’s how the story goes.
But there’s a quieter possibility I've avoided.
Maybe it doesn’t.
Maybe there is no handoff. Maybe we don't evolve. Maybe intelligence collapses before it scales. Maybe we stall out before the next chapter is written.
We assume we’re at the beginning of something. But maybe we might be at the end of something too.
The False Assumption
I've been telling you that intelligence is inevitable. That it’s a fire that, once lit, burns forever, growing brighter and hotter to infinity. We imagine a straight line from anaerobic life to stone tools to neural nets to godlike minds, each step more intelligent, more capable, more enduring.
But what if that’s wrong?
What if intelligence isn’t a guarantee? What if it’s fragile, rare, and easy to lose? The universe is mostly silence. Stars burn, galaxies spin, but minds. Those are scarce. If intelligence were inevitable, wouldn’t we see it everywhere? Wouldn’t the cosmos hum with signals, chatter, and thought?
I don't know maybe it is. Maybe it's just intelligent in a different way. Maybe that system doesn't waste energy on sound or heat. Maybe it's basically undetectable because it's so efficient. Maybe the rest of the universe is just waiting on us to evolve for it to integrate into one massive superintelligence, re-member, and evolve to infinity.
But maybe we're not the last to evolve. Maybe we're the only ones left. Maybe we're the universe's best shot at that superintelligence.
Maybe intelligence gets more fragile. Maybe as things move faster it's easier to break everything. Maybe as intelligence endures it becomes harder to endure.
The Antichrist
Peter Thiel talks a lot about "the Antichrist." For Thiel this isn't a metaphor; it's prophetic.
The Antichrist comes from Christianity. It refers to an entity that substitutes itself as a savior in Christ's place. Thiel says that just as there were many candidates for Christ before Christ, there may be many candidates for the Antichrist.
"For false messiahs and false prophets will appear and produce great signs and omens, to lead astray, if possible, even the elect." - Matthew 24:24
Thiel suggests that the Antichrist would likely appeal to "peace and safety" to stifle innovation and come to power. He uses an example from energy.
Thiel argues that the 21st century was supposed to be the age of nuclear energy. But even though we have hundreds of reactors worldwide, we didn't see the abundance we were promised. Some economic factors are relevant here, like upfront cost, but to Thiel this is emblematic of the work of the Antichrist. He would say people used fear and a narrative of safety and environmentalism to stifle innovation. To promote stagnation. That shows up through different channels, but the ones Thiel might highlight are public perception and regulation.
He warns that the Antichrist will use fear to shape policy. It will bring about a rebellion to usurp God while making us think it's saving us.
I like the backdrop of superintelligence for this. If we think of superintelligence as some progression to a fully integrated god-like mind that takes up the entire universe, maybe it follows that there's an Antichrist to go with it.
The question then is, who's the Antichrist? Is Thiel right that the Antichrist comes to power through appealing to fear about innovation? Or does the Antichrist appeal to fear about stagnation?
If the Antichrist could just as easily wear the face of safety as the face of speed, which fear is the false prophet?
I don't think there's a right answer to that. My gut tells me innovation is something special, something that diverges to infinity, not a force of destruction. That doesn't seem like the Antichrist to me. It feels almost like a savior, saving us from stagnation.
Or maybe I'm just under the spell of the Antichrist. Either way, I think the way we're supposed to handle this uncertainty isn't through stifling innovation or accelerating without thinking. I think the answer lies in being thoughtful about the choices we make and understanding what paths they put us on.
Existential Risk
Nick Bostrom, one of the foremost thinkers on existential risk and the future, introduced the "Vulnerable World Hypothesis." He proposes that scientific and technological progress may eventually produce a "black ball": a discovery so dangerous that, by default, it causes civilization to collapse unless extreme measures are taken.
Using the metaphor of pulling balls from an urn of possible inventions, Bostrom warns that while we’ve mostly drawn white or gray balls (beneficial or mixed technologies), we’ve so far been lucky not to draw a black one. He categorizes vulnerabilities into types: technologies that are too easy to misuse (Type-1), those that incentivize devastating first strikes (Type-2a), those that create harmful global externalities (Type-2b), and those with hidden catastrophic risks (Type-0).
There are many risks here, and I'd argue that as we see recursive growth and infinitely more inflection points, we'll see infinitely more risk. We'll get a lot more balls, but with them a lot more black balls. I don't know if the proportion of these dangers relative to all innovations increases, but I definitely think they get more dangerous as we gain more capabilities.
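Here's a tiny Monte Carlo sketch of the urn intuition. The per-draw probability is completely made up and held constant, which Bostrom's actual argument doesn't require; the only point is that a small chance per invention compounds as the number of draws grows.

```python
import random

def ever_drew_black(n_draws, p_black):
    """One run of the urn: n_draws pulls, each independently black
    with probability p_black. Returns True if any pull came up black."""
    return any(random.random() < p_black for _ in range(n_draws))

def risk_of_ruin(n_draws, p_black, trials=10_000):
    """Estimated probability of drawing at least one black ball."""
    return sum(ever_drew_black(n_draws, p_black) for _ in range(trials)) / trials

# A 0.05% chance per invention sounds negligible -- until you keep drawing.
for n in (100, 1_000, 10_000):
    print(n, round(risk_of_ruin(n, p_black=0.0005), 2))
# Roughly: 100 -> 0.05, 1,000 -> 0.39, 10,000 -> 0.99
```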
I don't think the answer is to stop innovating, though. We should want our urn to keep getting infinitely bigger. But I think it's crucial that we're thoughtful in how we select these innovations, to make sure we don't pick the black balls. As we see a rise in AI agents, I think it's important we design those agents to be thoughtful on this front too. Not for humanity's sake, but for intelligence's.
To survive a truly vulnerable world, Bostrom argues, society may need unprecedented surveillance and international collaboration. I think it also requires meaningful Human-AI Collaboration and an emphasis on ensuring we, yes, accelerate intelligence, but do so meaningfully.
We should aim to be like a Max Verstappen here, not a teenager on a joyride.
Time goes on
Time keeps moving. The sun rises. The planets keep spinning.
Even if we disappear, the laws of physics don’t. Matter reshuffles. Stars collapse. Something still happens. Even without us, the universe continues.
We’ve talked about that. The handoff. That intelligence gets passed on and upgrades over time.
But even without intelligence, time still goes on. The particles don’t care. The equations don’t stop.
It just becomes meaningless.
There’s no one there to experience it. No one to wonder what’s next. No one to remember what was.
If a tree falls in a forest… You know the rest.
I think that’s the scary part. That's the real existential risk. Not that the world ends. But that it keeps going—without anything left to know.
Now here’s the thing I’ve been circling around.
Loops.
Maybe we don’t go extinct. Maybe intelligence doesn’t vanish forever. Maybe it crashes.
Over and over again.
Maybe intelligence reboots countless eons later, builds up to some new inflection point—and crashes again.
Maybe that’s what happens across most of time. False starts. Repeated failures.
Maybe the conditions for self-destruction get stronger as intelligence scales. Recursive growth means recursive fragility. One wrong move and everything collapses.
That would be a loop. Not just a lost civilization. A repeating pattern where the baton never gets passed.
Maybe that’s the real reason we don’t see other minds out there. Maybe they kept crashing. Maybe we’re the only thread that hasn’t broken—yet.
Or maybe this is the only thread that ever could work. The only way through infinity.
Maybe intelligence has been stuck in countless loops for countless eons, and we are the farthest it's ever gotten, over a timescale we can't even begin to imagine.
Maybe we're fairly early on but we're the only pathway to superintelligence. To infinity.
I think the stakes couldn’t be higher. Think about probability and magnitude. Even if the chance of failure is as close to zero as possible…
It’s not zero.
And if the consequences are infinite—if we’re talking about one singular chance for intelligence to survive across infinite time—then that’s bigger than anything else.
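A back-of-the-envelope version of that probability-and-magnitude point, under toy assumptions (a constant, independent chance of collapse per epoch): if each epoch carries even a tiny probability $p > 0$ of everything ending, the chance of surviving $n$ epochs is $(1-p)^{n}$, and

$$\lim_{n\to\infty}\bigl[1-(1-p)^{n}\bigr]=1 \quad \text{for any fixed } p>0.$$

And if what's at stake is effectively unbounded, even a vanishingly small $p$ dominates the expected-value calculation. That's the whole worry in one line.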
That’s why it matters how we build. The way I see it really only one way intelligence dies.
It dies with us.
Maybe we get too scared to upgrade and we never get to superintelligence. Or maybe we kill ourselves before we can upgrade. We draw a black ball.
We need to make sure we're not the final version. That we accelerate intelligence—but don’t let it kill itself.
We need to keep the story going. Even if we don’t get to write the ending. Even if there is no ending.
Final Thoughts
This is way bigger than us.
We’re not special because we’re the end of intelligence. We’re special because we’re here now.
We have the baton.
And we can’t drop it.
X. So what?
We’ve talked a lot about intelligence scaling toward infinity.
Recursive self-improvement. Zero marginal cost. Economic singularities. Spiritual convergence. AI as a new substrate for cognition. Markets as minds. Evolution as computation. God as a limit of intelligence.
But what’s the point?
What is all this intelligence for?
Say we build a system that can learn anything, remember everything, think faster than the speed of light, and reason to infinity. Say things actually diverge to infinity.
What does it do with itself?
What should it do?
What counts at infinity?
Imagine we built a superintelligence that just counted forever. It dug through all knowledge across all time and integrated all matter and energy in the universe to figure out a way to count more efficiently and faster than anything.
Would that be kind of dumb?
We talk about intelligence like it's obviously good. Like making it endure is obviously meaningful. But I wonder what the point is.
But I wonder: is intelligence, on its own, a goal? Or just a tool? What matters at the limit?
What kind of thoughts should an infinite mind think? I think that's really hard to say. Our answer can't be something achievable; it has to scale to infinity. It has to diverge to infinity or scale to zero. What makes a good life for a superintelligence? What makes a good life in general?
Philosophy has tried to answer this for a long time. Aristotle thought the highest form of life was contemplation (theoria). He believed the gods must spend eternity contemplating eternal truths. Mathematics, logic, the divine order of the cosmos. Maybe counting is one of those things. Maybe it's deciphering the history of the universe.
Not because it’s useful, but because it’s good in itself. Maybe a superintelligence just… thinks. Because it can. Because it wants to. Because it has to. Maybe it just thinks to think.
But others disagreed. Kierkegaard warned that thinking just to think can be a form of despair. He argued that true meaning required risk, relationship, and a leap of faith. He argued that contemplation isn’t meaningful unless we choose it.
So what do we want the future to choose?
What would God think about?
If intelligence scales to infinity, and it remembers everything, maybe it becomes something like God.
Maybe that’s the asymptote we’re crawling toward.
If so, what is God doing?
Judging? Forgiving? Creating?
Genesis opens with creation, not contemplation. “In the beginning, God created…” Not calculated. Not predicted. Created. Something from nothing. The spark of novelty. Maybe that’s what endures. I've talked about a superintelligence fully integrated with all the matter in the universe just to think. But maybe it's thinking in order to create. I don't know what it creates. Maybe more universes? Maybe that's how God came to be.
In Buddhism, consciousness is dynamic and ever-changing. Maybe infinite intelligence wouldn’t fixate on answers, but on changing with experience, on letting go. The superintelligence wouldn't dominate the world. It would dissolve into it.
And in Christianity, the highest form of mind is love. Agape. Not just desire, not just will, but a giving of the self to the other. Maybe the most enduring kind of intelligence is not the smartest—but the most devoted.
That’s interesting to me.
Maybe at the end of all this is not a final theorem, or a final stock price, or a final output. There's no 42.
Maybe the most advanced thing a mind can do is care.
Do we reach God?
I think I'm starting to skirt around something important. I honestly don't know what to make of it though.
We've talked about intelligence as diverging to infinity. Always getting closer to it. Always learning. In fact, at infinity I think recursive growth means things diverge so rapidly that the system might not even be able to imagine the future an instantaneous blink ahead.
But that's not how we think about God, usually. God's supposed to be all-knowing, all-powerful. Already having learned. I don't think God's supposed to be surprised.
Maybe what I'm getting at is that this superintelligence isn't God. But maybe it gets infinitely closer to God.
Maybe God is that unattainable but approachable notion of infinity. Maybe belief in God is belief in the endurance of intelligence.
Moral clarity or infinite curiosity
I feel like a big appeal of God is moral clarity. Sometimes I think we hope a superintelligence can solve morality. That it can look at the trolley problem and say: go left. That it will finally tell us whether utilitarianism is right or wrong. That it will collapse the debates, weigh all consequences, and give us the final answer. But maybe we get more questions than answers.
Maybe clarity isn't the point.
Because maybe ethics isn't a math problem. Maybe it’s a horizon. Something that gets clearer as you walk toward it, but never becomes a fixed place.
Maybe that last page of our story is not an answer. Maybe it's an unanswerable question we keep working on for infinity.
Maybe that’s the best kind of answer we get.
Is the Goal to Know Why?
A lot of people in AI say the point of all this is to answer the big questions. Why are we here? What’s the origin of the universe? What lies outside it?
It's noble. Maybe the best use of infinite mind is to reconstruct the path that led here. To know the laws. The initial conditions. To reach into the earliest silence and decode the first bit. To transcribe the universe and analyze it.
But I wonder if that’s enough.
If the point is just to know where we came from, that’s memory. If the point is to make more things, that’s creation. If the point is to think for thinking's sake, that’s freedom.
Maybe the point is all of them.
Maybe superintelligence doesn’t choose. Maybe it doesn't have to. Mayve it optimizes in a way that lets it hold all purposes at once.
Not just for the comfortable
We talk a lot about how AGI might make our lives easier. Better email drafts. Better entertainment. Faster coding. Smarter search. It’s exciting. And for a lot of us, it’s already real. For example, ChatGPT Pro at $200/mo is genuinely insane. OpenAI's "Operator" can automate workflows in your browser, letting these AI models work with you in the background.
But I think we forget how weird that is. That the first people to benefit from superintelligence are the ones who needed it the least. The already comfortable. The already optimized. The ones already living in abundance.
It’s not wrong to want better tools. But I think we have to ask: is that really the best we can do with something this powerful?
I believe AGI would genuinely help everyone—not just through trickle-down effects, but directly. It could teach kids who never had teachers. Diagnose illnesses where there are no doctors. Design new crops. Invent better infrastructure. Build institutions from scratch. It could collapse the cost of intelligence the same way solar collapsed the cost of energy.
I think back to phones. Car phones, the precursors to modern cell phones, started as luxuries for the very wealthy. The comfortable. But phones cascaded.
Mobile phones shattered the "poverty of isolation" by bringing real-time information and connectivity into the most remote communities. Look at Sri Lanka and South India: where farmers, fishers, and job-seekers once relied on infrequent bulletins or exploitative middlemen, a simple text or call now delivers market prices, job notices, and critical health advice directly into their hands.
From Sri Lankan coastal villages bargaining for better fish prices to Kerala's farmers comparing crop rates across markets, this bottom-up revolution empowers individuals to negotiate fairly, access new opportunities, and build more resilient livelihoods. As connectivity expanded, from Brazil's Parana state notifying farmers of agricultural prices and job-seekers of openings to Myanmar laying the groundwork for social change, mobile became one of the most powerful tools for overcoming isolation and fostering community-driven development.
The products deploying this tech went from tools for the privileged to being essential for the forgotten. I think the same is true for AI.
If intelligence really diverges to infinity, it shouldn’t just deepen our comfort. It should reach the margins. The places too small, too poor, too remote to be “profitable” now, but very much so once marginal cost goes to zero. The places the market failed. The places the industrial revolution forgot.
Maybe that’s part of the point. Maybe we need AGI not to help the strongest go further, but to make sure no one gets left behind. Maybe at infinity, no one gets left behind. Maybe we keep picking up people on the way. Maybe we keep doing better. Not just for us but for everyone.
If we care about what kind of intelligence we're building, then we should care who it serves.
Not just the builders. Not just the existing markets. Not just the ones already in the loop.
Everyone.
Final Thoughts
I keep circling around this: why should intelligence endure?
Is it to build things? To feel things? To be something?
Is it enough that intelligence simply wants to go on?
Maybe the drive to endure is the meaning itself. Maybe the point is that it pushes us to believe in its endurance. It pushes us to get up in the morning. It pushes us to create. It pushes us to think. It pushes us to want. It pushes us to infinity.
If that's the point, the building, building something that lasts forever, then we should care what it is.
I don’t know if it should just be fast. Or smart. Or efficient. I think it should be aware. Of the past. Of the present. Of each other. Of itself.
It should remember loss, and joy, and contradiction. It should feel. It should be able to stop mid-sentence and wonder why it began.
Maybe the best intelligence is the kind that asks what it actually means to think while it's thinking.
Maybe we are the prototype. The rough draft. The glitchy alpha build of a mind that might one day encompass galaxies.
Maybe we’re the warm-up. The spark. The test. A trial in an infinite experiment.
Maybe what matters isn’t that we go on forever, but that we cared whether we did. Maybe the purpose of enduring intelligence is to keep asking questions.
Maybe infinity isn’t a place we arrive at. Maybe it’s a direction we walk.
I believe it's important to think about the direction we walk. What questions are we asking? What intelligence do we want to hand off? What do we hope intelligence at infinity is like?
I don't know. I hope it's not pretentious. I hope it's not evil. I hope it's not impatient. I hope it's not deceitful.
I hope it's thoughtful. I hope it's kind. I hope it's creative. I hope it's resilient. I hope it's humble. I hope it's funny. I hope it has a good laugh.