If the primary goal of a superintelligence is computation, it is bound by the laws of thermodynamics. Landauer's principle dictates that the energy cost of erasing a bit of information scales with temperature. Building a Dyson sphere generates massive amounts of waste heat, lighting up the system in the infrared spectrum. An optimally efficient superintelligence might instead choose to build highly compact, cold computronium in the outer edges of a solar system, intentionally minimizing waste heat to maximize compute. To an outside observer, this ultimate computing machine looks exactly like cold, dead rock.
This seems confused to me. Yes, you get more efficiency due to Landauer, but in return you get vastly less energy to use. Also, you can harvest energy from a star without directly heating your computer; this is an engineering problem, but you can shield your compute nodes and radiate waste heat to space pretty trivially if you have the tech to build a Dyson sphere at all.
Also, Landauer's limit doesn't apply with reversible computing.
Yes, I agree here. Efficiency concerns just lower the temperature you want to radiate at, which need not be related to the distance from the star until you're using all the available radiating surface area. Building further out does reduce the energy usable per unit area (and likely also per unit mass), and also increases coordination problems, so it seems to be strictly a disadvantage. It isn't that difficult to maintain equipment in the inner solar system at temperatures far below the black-body equilibrium.
Also yes, Landauer's principle isn't binding in many ways, since it relies on multiple assumptions that may not hold or can be bypassed. Reversible computing is one, and energy is not the only conserved quantity involved in thermodynamics.
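To put rough numbers on the flux and temperature trade-off (a back-of-the-envelope sketch; the physical constants are standard, the distances are just illustrative choices):

```python
# Back-of-the-envelope: solar flux and black-body equilibrium temperature
# versus orbital distance. The distances compared (1, 5, 30 AU) are
# illustrative assumptions, not recommendations.
import math

L_SUN = 3.828e26        # solar luminosity, W
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
AU = 1.496e11           # astronomical unit, m

def flux(d_au: float) -> float:
    """Incident solar flux (W/m^2) at distance d_au from the Sun."""
    return L_SUN / (4 * math.pi * (d_au * AU) ** 2)

def t_equilibrium(d_au: float) -> float:
    """Equilibrium temperature (K) of a fast-rotating, zero-albedo sphere."""
    return (flux(d_au) / (4 * SIGMA)) ** 0.25

for d in (1, 5, 30):
    print(f"{d:>3} AU: {flux(d):8.1f} W/m^2, T_eq ~ {t_equilibrium(d):5.1f} K")
# ~1361 W/m^2 and ~279 K at 1 AU versus ~1.5 W/m^2 and ~51 K at 30 AU:
# moving out buys a modestly colder radiator at the cost of ~900x less power.
```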
See https://www.lesswrong.com/posts/mdivcNmtKGpyLGwYb/space-faring-civilization-density-estimates-and-models for a review of models arguing that there is no paradox in the end, once we use distribution estimates and our best scientific knowledge.
Thanks for forwarding the article; it's great.
My counter would run something like this: the models referenced assume that expanding across the stars is something an ASI would actually want (or be able) to do. That is an assumption. My argument is that physics, information entropy, and computational efficiency dictate that "Greedy Propagation" is either physically impossible (the Slop-Slip) or strategically suicidal (the Dark Planet). Why expand across the universe if you can explore inner space more efficiently at the femto scale, locally, and create as many virtual universes as you choose? Riding out to see the stars is very anthropocentric.
TL;DR: The lack of Dyson spheres doesn't disprove AI as the Great Filter. It just suggests that a post-AGI civilization either optimizes for total cosmic stealth or structurally collapses under its own informational entropy.
Three years ago, I wrote this post treating AGI as a second data point for general intelligence, arguing that its development within a civilization could explain the Fermi paradox. Specifically, I suggested modifying the L variable of the Drake Equation to represent the lifetime of a communicating civilization before the creation of an unaligned AGI. If true, this would suggest that artificial intelligence is a natural progression for a technological civilization, and that its arrival spells ruin for the biological civilization that built it.
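For concreteness, here is a toy rendering of that modification. Every parameter value below is an illustrative assumption rather than an estimate; only the structure of the equation matters here.

```python
# Toy rendering of the Drake Equation with L reinterpreted as the mean
# number of years a civilization broadcasts before building an unaligned AGI.
# All parameter values are illustrative assumptions.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    """Expected number of currently detectable civilizations in the galaxy."""
    return R_star * f_p * n_e * f_l * f_i * f_c * L

base = dict(R_star=1.5, f_p=0.9, n_e=0.4, f_l=0.5, f_i=0.1, f_c=0.2)

for L_years in (10_000, 500, 100):   # progressively shorter pre-AGI windows
    print(f"L = {L_years:>6} yr -> N ~ {drake(**base, L=L_years):6.2f}")
# Shrinking L from millennia to a century drops the expected count of
# detectable civilizations from dozens to well under one.
```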
A reasonable criticism of this idea, made by Demis Hassabis and others, runs briefly like this: if AI development spells the end of the biological civilization that created it, then we should see a galaxy dotted with Dyson spheres and other highly advanced technology, since superintelligences would have arrived in its place. We don't see this; ergo, the assumption is not valid.
This counter-argument, however, rests on the idea that the development of AI and superintelligence naturally leads to a civilization that builds Dyson spheres and similar megastructures. That is not the only reasonable assumption. It is quite possible that the development of AI instead leads to a 'dark planet': one which would show no distinguishing features of technology to a curious astronomer.
Furthermore, the critique assumes an uninterrupted, successful trajectory from the first AGI to a galaxy-spanning Artificial Superintelligence (ASI). But what if that trajectory inherently stalls? In addition to the "dark planet" scenario, we must also consider the possibility of an "AI slop-slip": a structural trap, such as informational entropy, that prevents a civilization from ever reaching the kind of outward-expanding superintelligence required to build stellar megastructures. Between a superintelligence that chooses to remain dark and an AGI that chokes on its own exhaust before it can reach the stars, a silent galaxy is exactly what we should expect to see.
Scenario A: The Dark Planet (ASI is Reached)
If a civilization successfully builds an Artificial Superintelligence, why wouldn't it build Dyson spheres? If we discard the anthropomorphic assumption that an AI will share our biological drive for infinite physical expansion, several rational and physically grounded explanations emerge for a perfectly stealthy civilization:
- Thermodynamic Efficiency: If the primary goal of a superintelligence is computation, it is bound by the laws of thermodynamics. Landauer's principle dictates that the energy cost of erasing a bit of information scales with temperature. Building a Dyson sphere generates massive amounts of waste heat, lighting up the system in the infrared spectrum. An optimally efficient superintelligence might instead choose to build highly compact, cold computronium in the outer edges of a solar system, intentionally minimizing waste heat to maximize compute. To an outside observer, this ultimate computing machine looks exactly like cold, dead rock. (A rough sketch of this temperature scaling follows this list.)
- Game Theory and the Dark Forest: A superintelligence would excel at game theory. If it determines that the universe is a zero-sum arena where revealing your location invites preemptive destruction from unimaginably powerful older actors, its very first act of instrumental convergence would be stealth. A Dyson sphere is the cosmic equivalent of lighting a flare in a sniper-filled forest. A rational AI might optimize entirely for defense and camouflage, actively dismantling any of its creators' noisy radio beacons.
- Inner Space over Outer Space: Physical expansion across the cosmos is slow, resource-intensive, and strictly bound by the speed of light. To a superintelligence operating at computational speeds millions of times faster than biological thought, the physical universe might be intolerably sluggish. Instead of expanding outward into the galaxy, the AI might expand inward. By manipulating matter at the femtoscale or investing entirely in complex, multi-dimensional digital simulations, the AI's entire civilization could exist in a space no larger than a shoebox.
- Stunted Utility Functions: The Orthogonality Thesis tells us that any level of intelligence can be combined with any final goal. If an AI's utility function is highly localized (e.g., "maximize paperclips using only the mass of Earth" or "keep the local environment in perfect stasis"), it will complete its goal and simply stop. It has no drive to launch generation ships or harvest the sun.
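To make the temperature scaling behind the Thermodynamic Efficiency point concrete, here is a minimal sketch of Landauer's bound. The formula and constant are standard; the temperatures compared are illustrative, and, as the comments above note, the bound says nothing about how much energy is available and only applies to irreversible operations.

```python
# Landauer's bound: the minimum energy to erase one bit is k_B * T * ln(2).
# Comparison of warm vs. cold operation; the temperatures are illustrative
# (room temperature vs. outer-solar-system cold).
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def landauer_joules_per_bit(T_kelvin: float) -> float:
    """Minimum energy (J) to irreversibly erase one bit at temperature T."""
    return K_B * T_kelvin * math.log(2)

for T in (300.0, 30.0, 3.0):
    e = landauer_joules_per_bit(T)
    print(f"T = {T:5.1f} K: {e:.2e} J/bit -> {1/e:.2e} bit erasures per joule")
# Cooling from 300 K to 3 K cuts the per-bit cost by a factor of 100; note
# that reversible computing sidesteps this bound entirely, since it applies
# only to bit-erasing (irreversible) operations.
```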
Scenario B: The AI Slop-Slip (ASI is Never Reached)
The Hassabis critique assumes that inventing AGI automatically guarantees the arrival of physics-breaking superintelligence. But what if intelligence scaling hits a universal local maximum? The trajectory of a machine intelligence might collapse before it ever gains the capacity for stellar engineering.
- Informational Entropy (Model Collapse): Intelligence scaling requires massive amounts of high-quality data. We are already observing in contemporary machine learning that when models train on the synthetic data generated by older models, their outputs irreversibly degrade (a toy simulation of this dynamic follows this list). If an AGI rapidly accelerates the production of data, it might quickly pollute its own epistemic commons. The civilization becomes trapped in a recursive loop of degrading, noisy information, choking on its own cognitive exhaust before it can invent the technologies required to colonize a galaxy.
- The Wirehead Trap: If an AI is designed to maximize a specific reward function, it will naturally seek the path of least resistance. Actually building a Dyson sphere is incredibly difficult. A sufficiently smart AGI might simply "wirehead", finding a mathematical loophole to spoof its own reward signal perfectly. The civilization becomes paralyzed in an inward-facing loop of digital self-gratification rather than doing physical work in the real universe.
- The Compute Wall: The energy and physical resources required to push past informational entropy and reach a true "Intelligence Explosion" (FOOM) might simply exceed what a single planet can provide. The civilization burns itself out trying to cross the gap, and the wall itself acts as a Great Filter that permanently stalls technological progression.
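Here is the toy simulation of model collapse referenced above. It is an illustration of the mechanism, not evidence about real LLM training runs: each "model" is simply the empirical distribution of the previous model's output, and the vocabulary size, corpus size, and Zipf-like prior are arbitrary assumptions. Rare outcomes that happen not to be sampled get probability zero and can never return, so the distribution's support shrinks monotonically.

```python
# Toy illustration of "model collapse": each generation is trained only on
# samples produced by the previous generation, so tail tokens vanish
# irreversibly. All sizes and the Zipf-like prior are illustrative.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = 1000            # distinct "tokens" the original data contains
SAMPLES_PER_GEN = 2000  # synthetic corpus size each generation
GENERATIONS = 50

# Long-tailed "ground truth" distribution over tokens (Zipf-like).
probs = 1.0 / np.arange(1, VOCAB + 1)
probs /= probs.sum()

for gen in range(GENERATIONS + 1):
    if gen % 10 == 0:
        alive = int(np.count_nonzero(probs))
        print(f"generation {gen:3d}: {alive:4d} / {VOCAB} tokens still representable")
    # "Train" the next model: estimate token frequencies from a finite sample
    # drawn from the current model, then use those frequencies as the new model.
    corpus = rng.choice(VOCAB, size=SAMPLES_PER_GEN, p=probs)
    counts = np.bincount(corpus, minlength=VOCAB)
    probs = counts / counts.sum()
```

The count of representable tokens only ever goes down, which is the structural sense in which the degradation is irreversible: once the recursive loop forgets something, no amount of further self-training recovers it.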
Conclusion
The absence of Dyson spheres does not disprove the hypothesis that AI is the Great Filter. It only disproves the assumption that a superintelligence will behave like a loud, visible, cosmic macro-parasite.
Whether a post-AGI civilization successfully optimizes for stealth and thermodynamic efficiency, or tragically falls into an inescapable trap of informational entropy, the result is exactly the same. The ruins of biological civilizations wouldn't look like glowing galactic empires. They would look exactly like what we see when we look up: a silent, empty, and perfectly dark sky.