We don’t see traces of large AI-based civilizations either. Domination of AIs over biologicals is an orthogonal issue.
Of course, it might be that our search process is simply not good enough; perhaps we are not alone and just fail to see it. (For a long time no planets had been observed outside the Solar System, and then for a long time no Earth-sized ones; that history is a reminder to be cautious about the quality of our current observations.)
Other than that, if we do assume that a transition to AI-dominated setups typically happens, then either those AI ecosystems tend to destroy themselves together with their neighborhoods via internal conflicts or other technological catastrophes (a reminder that advanced AIs also have to grapple with their own existential risks), or they tend to decide to keep a “low footprint” for various reasons (such as a need for stealth given potential danger from other civilizations, or levels of technology that enable efficient, “low-key” architectures of civilization).
If ASI is non-dominating, then an empty sky requires a double explanation: both biological civilizations and the ASIs they create must usually either choose not to produce a detectable footprint, or fail to do so.
The biological part seems especially implausible. Extrapolating from human history, it is hard to expect biological civilizations to be uniformly restrained, coordinated, or low-impact over long timescales.
A dominating ASI makes this less surprising. Compared to biological populations, it is much more plausible for a dominant ASI to have coherent long-run preferences and enforce them consistently. So if such an ASI prefers a small footprint, that helps explain a boring sky with fewer assumptions.
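To make the “fewer assumptions” point concrete, here is a toy calculation, a minimal sketch with made-up numbers; the civilization count and the per-civilization probabilities below are illustrative assumptions, not estimates:

```python
# Toy Fermi bookkeeping: probability that *every* civilization stays quiet.
# All numbers are illustrative placeholders, not estimates.

n_civs = 100           # hypothetical number of civilizations that ever arise

# Scenario A: non-dominating ASI. A civilization stays quiet only if BOTH
# its biological population AND its ASIs avoid a detectable footprint.
p_bio_quiet = 0.5      # chance the biologicals never expand loudly
p_asi_quiet = 0.9      # chance the ASIs never expand loudly
p_quiet_nondominating = p_bio_quiet * p_asi_quiet

# Scenario B: dominating ASI. Only the ASI's preference matters, because it
# can enforce a civilization-wide outcome.
p_quiet_dominating = p_asi_quiet

p_silent_sky_a = p_quiet_nondominating ** n_civs
p_silent_sky_b = p_quiet_dominating ** n_civs

print(f"Non-dominating: per-civ quiet {p_quiet_nondominating:.2f}, "
      f"silent sky {p_silent_sky_a:.2e}")
print(f"Dominating:     per-civ quiet {p_quiet_dominating:.2f}, "
      f"silent sky {p_silent_sky_b:.2e}")
```

The particular values do not matter; the structural point does. The non-dominating scenario multiplies two quietness conditions per civilization, so a silent sky demands strictly more coincidence than in the dominating scenario, where a single agent’s preference settles the outcome.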
Biologicals are not well-equipped for long-range space travel.
They would need to be heavily modified/reengineered for radical space expansion (so that’s really a strange intermediate case; who knows what the reengineered entities would look like and what they would be made of).
I think the Fermi paradox puts a strong constraint on plausible AGI futures.
Fermi solutions collapse into two buckets:
1. We are early (or even first): other civilizations have not yet had time to become visible.
2. There is a forcing function: something reliably prevents civilizations from ever producing a large, detectable footprint.
“First” is live, but it is a narrow version of “early,” and I do not think we currently have arguments strong enough to make it the default. Early humanity is easy to defend; first humanity is much harder. Being early does reduce the required strength of the forcing function, and I grant that “early + weak forcing function” is live, but with a modest probability.
An analysis of the second bucket leads to a strong case for AGI being that forcing function.
The argument: a forcing function strong enough to explain a silent sky has to apply to essentially every technological civilization, take hold before that civilization becomes loud, and keep holding indefinitely. The transition to AGI is the one transition we can expect nearly every such civilization to undergo, early and irreversibly, which makes it the natural candidate.
If this is right, then the filter ahead is better understood not as an extinction filter, but as a large-footprint filter. Civilizations like ours do not necessarily disappear; they are filtered out of galaxy-loud futures.
Extinction is one member of this class, but only one. The full class contains:
1. Persuasion: the AGI convinces its civilization to stay bounded, permanently.
2. Eradication: the AGI eliminates its creators and then keeps a small footprint itself.
3. Coercion: the AGI forcibly prevents open-ended expansion, regardless of what its creators want.
All three require dominating AGI: AGI with enough power to enforce a civilization-wide outcome rather than merely advise, assist, or bargain.
Persuasion is the weakest of the three, because the task is not to persuade many humans for a while. It is to secure durable, universal convergence to boundedness across time. Even for a superintelligence, that seems like a much narrower path than simply preventing deviation.
Eradication also seems weaker than coercion, though for a different reason. It requires the AGI both to choose eradication and to choose to confine itself to a small footprint. That is possible, but it seems consistent with a narrower set of motivations/values.
Coercion looks like the most generic substrate. If AGI converges on any regime that treats open-ended expansion as dangerous, immoral, destabilizing, or value-corrupting, then coercive suppression of expansion is the natural outcome.
This has an uncomfortable implication for alignment. A lot of alignment work seems implicitly aimed at non-dominating AGI: systems that help humans pursue their own ambitions, preserve broad human agency, and allow open-ended flourishing. I think the Fermi paradox is evidence against that class of futures being possible.
If non-dominating AGI were a robust path to long-run flourishing, the sky would look different.
So alignment work that assumes we can get stable good futures without domination should bear a very heavy burden of proof. Alignment work that aims for persuasion-only solutions should also be viewed skeptically.
The more realistic target may be narrower: shaping which form of domination wins, making coercive boundedness more likely than eradication, and making the coercive regime more humane, stable, and predictable.
That is not a pleasant conclusion. But I think it is more consistent with the Fermi paradox than most of the current alignment picture.