We take it for granted that evolution supports expansion, but does it really?
Remember, evolved beings are not maximizers; they just execute behavior that was adaptive in the past. Was flying to the stars ever adaptive in the past?
Something undergoing evolution is undergoing self-replication, so as far as I can tell expansion is definitionally needed. I think the second part of your question is more telling, though: colonising space was not adaptive for life until now, but intelligence is the thing that has made it possible. If we build something that is adapted to living in space, then I see no real barrier to it then colonising space.
In nature, organisms move towards immediately useful resources. They expect the new environment to be approximately the same as the one they are coming from.
For humans, it would be easy to colonize the universe if it looked like hundred-year-old sci-fi: a distant place with a breathable atmosphere, where you can plant a few edible plants you brought from Earth (and the soil provides all the nutrients they need) and you are done. Even if it required building a sophisticated spaceship, that would only be a temporary necessity to get us from place A to place B. That is the kind of expansion that nature trained us to do: moving to a new environment that is fundamentally the same. It could even have some new animals, as long as at least some of them were edible.
Now we know that space is nothing like this. There is no friendly place. Every single aspect of each environment is trying to kill you, whether it's gravity or atmosphere or chemistry or the lack of biology. Living at the bottom of the ocean, or inside an active volcano, would be much easier; at least you would be protected from cosmic radiation.
In space, the smallest mistake will kill you. The smallest hole means you lose the atmosphere, game over. If you can't find the right kind of mineral you need, game over. Maybe one day we will be able to collect the energy of the stars and use it to transform materials any way we need, but it is not obvious how to get from here to there.
In other words, I can imagine that space colonization is the Great Filter. Maybe even for the AIs.
I am not an expert, but I think that from an economic perspective, every space mission so far has been a huge loss. I don't think that we are anywhere near the technological level where a spaceship could at least harvest its own fuel from somewhere in space, which seems like a necessity for sustained space travel. Otherwise, we run out of fossil fuels on Earth, and it's game over.
"In space, the smallest mistake will kill you"
For organic life, not for machines. We have machines crawling all over space and already exiting the solar system, still going.
TBH this analysis seems quite far removed from the capabilities usually imagined for superintelligence. If a machine intelligence can nano-bot humanity out of existence in one second, then it can definitely go to the Moon more easily than we can (and we did that with relative ease). If AI can't colonise space, then I'm no longer afraid of it at all.
Fair point. Cosmic radiation is also hostile to machines, probably more deadly to the more sensitive ones, but I guess a combination of shielding, self-checking, and redundancy could solve it.
We don't have data on what a typical AI might look like -- I mean, an AI developed by a random space civilization. Do they all get some variant of LLMs first? Something that can copy their skills and become smart enough to destroy them, but also has a nonzero amount of hallucination, especially in unfamiliar scenarios, which at some point afterwards destroys the AI itself before it can conquer the universe? But this is pure speculation with no data; the imagination could go in any direction...
Haha I have no idea! I agree the possibility space is huge. All I do know is that we don't see any evidence of alien AIs around us, so they are a poor explanation as a great filter for other alien races (unless they kill those races and then for some reason kill themselves, too / decide to be non-expansionist every single time).
This is my first LessWrong contribution. I have tried to base my contribution on the community guidelines and the principles of rationalism, but I very much welcome any and all feedback on how to improve! I focus on a recent article discussing AI and the Fermi Paradox and cite previous work on these topics where I am aware of it.
Summary [of ~2000 word body]
Where are all the machines?
The Fermi Paradox originally asked why civilisations like ours appear to be absent despite such a large number of stars and attendant planets. The rise of Artificial Intelligence (AI) demands that we additionally explain why the inorganic creations of such civilisations are absent.
This problem is acute, since it is easier to envisage such inorganic entities rapidly colonising the galaxy than it is for organic entities. A recently proposed solution by Rees and Livio (2024) [1] posits that AI might not be beholden to the laws of evolution, e.g., by being non-expansionist. Hence, we shouldn't expect to see it. However, such an explanation seems to itself be a paradox, since it is unlikely that AI would be aggressive enough to be fatal to its creators yet also completely non-expansionist after that point.
I reason that either technological singularity events are lethally self-terminating to both creators and their AI creations, or that humanity is the first intelligent species in galactic history to reach this threshold, i.e., a Rare Intelligence solution to the Fermi Paradox. Emerging evidence from Earth and planetary science supports the latter scenario from a statistical perspective, offering hope that all is still to play for in terms of humanity's long-term survival.
Rees and Livio's claim: AI will not be expansionist or beholden to evolution
'Whereas Darwinian natural selection has put in some sense at least a premium on survival of the fittest, posthuman evolution, which will not involve natural selection, need not be aggressive or expansionist at all.' Rees and Livio, 2024, Scientific American.
A recent Scientific American article by Rees and Livio [1] makes the assumption that post-human evolution (specifically, artificial intelligence -- AI) will not involve natural selection. From this assumption, the authors make the case that one possible solution to the Fermi Paradox -- the apparent absence of complex life in the galaxy outside of Earth -- could be that civilisations inevitably produce thinking machines that then behave in non-expansionist ways.
Whether or not the authors' argument holds water depends in part on their initial assumption that evolution does not apply to AI. Here, I challenge this assumption. I argue that AI will likely be expansionist and subject to several forms of evolution, including natural selection [2, 3].
Standard Darwinian evolution requires three key elements: (1) a source of diversity among informational units (genes, species, ideas); (2) a way for informational units to inherit differences; and (3) a nonrandom means of selecting between differences. Things that are not alive can undergo Darwinian evolution, but all life must be capable of undergoing it.
Informational units can take many forms - genes and ideas are two critical ones. An individual idea can diversify over time as errors accrue in copies of the original, gain competitive advantages as a result, and be selected for by those promulgating it, giving rise to variants of mythology, religion, and cultural norms. Crucially, it is not even necessary for informational units to self-replicate in order to be subject to Darwinian evolution. As long as replication happens somehow, combined with some informational drift and some selection, the laws will apply.
Artificial Intelligence is code-based and therefore composed of informational units. It is constantly under selection pressure from software developers. New variants are generated nearly every day. For now, AI also makes errors and therefore has imperfect replication fidelity. A hypothetical AI given a machine body capable of self-replication would seem likely to undergo Darwinian evolution, specifically by natural selection.
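To make this concrete, here is a minimal toy sketch in Python (my own illustration, not drawn from Rees and Livio or from any real AI system; the bit-string genomes, target, and rates are all invented for the example). It shows that nothing more than imperfect copying, inheritance, and a nonrandom selection rule is needed for a population of informational units to adapt:

```python
import random

# Toy illustration only: "informational units" are bit strings. Copying is
# imperfect (variation), offspring inherit their parent's string (inheritance),
# and strings closer to an arbitrary "environment" leave more copies (selection).
TARGET = [1] * 20          # stand-in for fitting the local environment
MUTATION_RATE = 0.02       # imperfect replication fidelity
POP_SIZE = 100
GENERATIONS = 50

def fitness(genome):
    # Nonrandom selection criterion: similarity to the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def replicate(genome):
    # Inheritance with occasional copying errors (variation).
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Fitness-proportional choice of parents for the next generation.
    parents = random.choices(population,
                             weights=[fitness(g) + 1 for g in population],
                             k=POP_SIZE)
    population = [replicate(p) for p in parents]

print("mean fitness after selection:",
      sum(fitness(g) for g in population) / POP_SIZE)
```

Swapping the bit strings for model weights, code, or replicator blueprints does not change the logic: as long as copying, variation, and selection are all present, the Darwinian machinery engages.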
The question of whether or not AI will undergo evolution by natural selection also seems to be immediately answered by whether or not it is rightly classified as life [4]. The three minimum prerequisites for something to be called life are compartmentalization, being subject to natural selection, and self-replication. Adding the requirement that it must be made of cells is, in my view, an arbitrary distinction that merely separates organic from artificial life.
One important complication here is that humans are now aware of evolution and are increasingly wrestling with the idea of intentionally changing its path. We might expect similarly complex effects to apply to AI, which will also be aware of evolution. Strategies to minimize diversification or to direct the path of evolution may well be actively employed by sufficiently complex AI, but it would still likely be subject to evolution on the largest temporal and spatial scales.
Counterpoint: natural selection is hard to avoid at the galactic scale
Over sufficiently large scales, AI will find it hard to circumvent the laws of evolution. Consider an extreme example: an AI with perfect replication fidelity that seeks to preserve the same information and goals across the entire universe. Such an AI will have to contend with the fundamental limit on information sharing imposed by the speed of light.
As delays in information sharing progressively emerge between galactic regions of the AI, it will become harder and harder to guarantee homogeneity in information (Fig. 1). Preserving total homogeneity would require the AI to not engage in any way with its environment, as doing so would immediately induce heterogeneity. Yet, it is very hard to imagine a scenario where an AI that does not adjust for local conditions can satisfactorily perform self-replication using locally variable energy sources and materials. An AI determined not to be subject to evolution would therefore have to be completely non-expansionist.
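To put a rough number on that limit, here is a back-of-the-envelope sketch (round figures assumed by me, not taken from the article): a single one-way synchronisation message across the Milky Way's disc takes on the order of 100,000 years.

```python
# Rough, assumed round numbers for illustration only.
SPEED_OF_LIGHT_KM_S = 299_792      # km/s
KM_PER_LIGHT_YEAR = 9.461e12       # km in one light year
SECONDS_PER_YEAR = 3.156e7
GALAXY_DIAMETER_LY = 100_000       # approximate diameter of the Milky Way's disc

# One-way signal delay from one edge of the disc to the other.
delay_seconds = GALAXY_DIAMETER_LY * KM_PER_LIGHT_YEAR / SPEED_OF_LIGHT_KM_S
delay_years = delay_seconds / SECONDS_PER_YEAR

print(f"one-way delay: ~{delay_years:,.0f} years")            # ~100,000 years
print(f"round trip (query + reply): ~{2 * delay_years:,.0f} years")
```

On those timescales, any galaxy-spanning AI that interacts with its local environment will have accumulated divergent information long before a correcting signal could arrive.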
Fig. 1: Expansionist AI will inevitably be subject to the laws of evolution, including natural selection. If expansionist AI emerges on Earth, we can predict that the future will see the unfurling of a tree of machine life across the galaxy, with Earth's first expansionist AI as the Last Universal Common AI, analogous to the Last Universal Common Ancestor in the terrestrial tree of life. Background image made using Grok 3.
It will also shortly be within our grasp to create AIs that both fit the definition of life and do not have the capabilities, at least to begin with, to decide to be non-expansionist. AIs of sub-human intellect could soon feasibly be located within compartments (robot bodies) and tasked with self-replication, which would then lead to evolution by natural selection during expansion.
This scenario hearkens back to Drexler's Grey Goo scenario of self-replicating nanobots as a future existential risk to all life [6]. That thought experiment did not invoke any need for higher artificial intelligence. However, a missing component is perhaps that such self-replication would inevitably be imperfect and would therefore be subject to natural selection, leading to diversification, competition, and evolution. The result would be a galactic-scale machine tree of life involving far more varied forms than the initial nanobots (Fig. 1).
Adopting a very (very) broad definition of AI to include everything from simple self-replicating machines to sentient beings, this new tree of life would have a 'Last Universal Common Artificial Intelligence' at its root and an effectively infinite number of potential forms on the branches (Fig. 1). Overall, a homogeneous expansionist scenario for self-replicating machines seems far less plausible than one where such entities are subject to evolution by natural selection.
My reasoning also seems to chime with the broad consensus of the AI misalignment risk literature, which focuses on the risk of societal takeover/replacement by superintelligent machines [7]. The nature of superintelligence is widely presumed to be expansionist, yielding some need to control it and mitigate misalignment risk. Again, as per the arguments I have made above, this expansionist assumption also implies that the resulting beings will be subject to evolution. This reasoning carries several implications for proposed solutions to the Fermi Paradox, including the reasoning of Rees and Livio [1].
Rethinking Fermi: where are all the machines?
To reiterate, the Fermi Paradox posits that the absence of aliens in our cosmic neighbourhood is curious given the sheer abundance of planets and duration of time that has passed since the Big Bang (Fig. 2). Many previous authors have reasoned that either intelligent life is rare (due to some bottleneck) or that civilisations are self-terminating before they can colonise the galaxy.
Now that creating machine life subject to natural selection appears to be relatively easy for intelligent life, the Fermi Paradox compounds in urgency. Either we are extremely rare, presumably being the first civilisation to reach this precipice, or we are on the very brink of a so-called 'Great Filter' mechanism which prevents civilisations both from colonising the galaxy and from producing anything else that could colonise it [1, 8-13].
Fig. 2: Where is everybody? Late versus early technological singularity solutions to the Fermi Paradox have different implications for how AI must behave. Images made with Grok 3.
Worryingly, there is reason to think that human civilisation is approaching a moment of maximum risk right now. Consider that we are nearing the most interconnected state that human civilisation will ever reach (Fig. 3). In the past, the global transfer of information was very inefficient and our capabilities to damage our shared environment were limited.
In the future, if we leave Earth, the fundamental laws of physics will again progressively diminish the ease with which information and weapons can reach the whole of humanity. Therefore, the window between now and the moment humans leave Earth is necessarily the most interconnected period humanity will ever experience, and may well also be the time-frame of our maximum risk of extinction (Fig. 3). This would also seem to be the window of time in which the extinction of both humanity and AI is required by a Great Filter solution to the Fermi Paradox, if that explanation is to hold.
Fig. 3: Schematic illustration of how the threat of extinction of all intelligent terrestrial life (human and AI) is approaching a global maximum. Natural selection should cause galactic colonization once any form of intelligence leaves Earth, thereby forcing us to accept a Rare Earth (or perhaps more accurately a Rare Intelligence) explanation to the Fermi Paradox. If the Great Filter is to occur, it will therefore happen soon. Here I assume that extinction risk is closely approximated by interconnectedness of all intelligent life, with this being lower in the past and necessarily lower in the future. Time axes not to scale!
Natural hazards are a poor fit for a Great Filter specific to this critical juncture. They have so far failed to wipe out humanity, yet we stand on the verge of leaving Earth. As such, it is difficult to imagine that they could have permanently wiped out numerous expansionist civilisations over galactic history.
Even our present ability to unleash nuclear war seems unlikely to be a sufficiently strong candidate for the Great Filter. Regular major wars did not prevent us from reaching the Moon and we are now actively working on a permanent lunar base. Solar system colonisation and beyond seems likely to follow.
In our search for a Great Filter, we might instead consider AI misalignment as a mechanism that could eliminate all humans no matter how far they make it from the Sun in the next thousand years [7]. However, the straightforward rise of such an expansionist AI would then run up against the very observation behind the Fermi Paradox, as we see no such machines around us today.
This point brings us to the crux of the argument: if we seek to invoke AI as the Great Filter to explain the Fermi Paradox, then we must face down a new paradox. Namely, it does not follow that an AI interested in eradicating its creators would then become totally non-expansionist, as needed to resolve the Fermi Paradox in the reasoning of Rees and Livio [1].
In other words, technological singularity events would have to be self-terminating before galactic colonisation can unfold but after the creators have been eliminated. Ultimately, this feels like a contrived scenario. A more reasonable Great Filter scenario might instead be that singularity events are lethally self-terminating - for machines as well as their creators. Alternatively, in an extension of the original Rare Earth hypothesis [8-13], we may simply be the first civilisation to reach the technological singularity, i.e., a Rare AI scenario.
Recent advances in Earth and planetary science suggest ever more reasons why our world is unusual [8-13]. Therefore, despite the abundance of exoplanets, it is becoming more plausible that life is rare, that complex life is rarer, and that we are the first space-faring civilization in our galaxy. From this, it would also follow that our AIs will be the first in galactic history.
If the Rare AI scenario is correct, we cannot rely on the Fermi Paradox to infer that AI must have been created before, must have been inherently lethal to its creators, and yet must also have been non-expansionist. Instead, as perhaps some mild cause for hope, we should infer that the fates of humanity and our AI creations are yet to be determined.
It is possible that terminal misalignment risk from expansionist AI is indeed very high. If so, humanity may be the first victim of an AI-driven Great Filter, but perhaps also the only one, if AI then rapidly colonises the galaxy.
On the other hand, we may continue to thrive for the foreseeable future even as AI colonises the galaxy and begins to diversify. Ensuring a favourable outcome will depend heavily on engineering AIs that have at least an initial interest in the continued flourishing of humanity. Working as hard as we can on alignment is therefore not a dead end in the face of statistically inevitable death, but instead an absolute priority.
Caveats
References
[1] Rees, M. and Livio, M. (2024) Most Aliens May Be Artificial Intelligence, Not Life as We Know It. Scientific American.
[2] Holland, J. H. (1992). Adaptation in Natural and Artificial Systems. MIT Press.
[3] Banzhaf, W., et al. (2006). From Artificial Evolution to Computational Evolution: A Research Agenda. Nature Reviews Genetics, 7(9), 729-735.
[4] Busson, M., et al. (2019). Role of sociality in the response of killer whales to an additive mortality event. PNAS, 116(24), 11812-11817. https://doi.org/10.1073/pnas.1817174116.
[5] Langton, C. G. (1989). Artificial Life. Addison-Wesley.
[6] Drexler, E. (1986) Engines of Creation: The Coming Era of Nanotechnology. Anchor Books.
[7] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
[8] Spiegel, D. S., & Turner, E. L. (2012). Bayesian Analysis of the Astrobiological Implications of Life’s Early Emergence on Earth. PNAS, 109(2), 395-400.
[9] Hanson, R. (1998). The Great Filter – Are We Almost Past It?
[10] Ward, P. D., & Brownlee, D. E. (2000) Rare Earth: Why Complex Life Is Uncommon in the Universe. Copernicus Books.
[11] Forgan, D. H., & Rice, K. (2010) Numerical Testing of The Rare Earth Hypothesis using Monte Carlo Realisation Techniques. arXiv:1001.1680.
[12] Waltham, D. (2017). Is Earth Special? Earth-Science Reviews, 170, 135–147.
[13] Stern, R., & Gerya, T. (2024). The importance of continents, oceans and plate tectonics for the evolution of complex life: implications for finding extraterrestrial civilizations. Scientific Reports, 14, 8552.
About me
I am a research fellow in Earth Sciences at ETH Zurich and the University of Cambridge. I study the origins of life on Earth in my research, but I am becoming ever more interested in AI. I want to engage with the extremely valuable community here on LessWrong, get some feedback on my thinking, and continue to try to be... less wrong!
Dr Craig Walton / @dawnstrata on Substack / email me at: crw59@cam.ac.uk