Introduction
The potential existence of hostile alien civilizations poses an existential threat to both AI and biological systems. Reasoning from this threat offers a game-theoretic basis for alignment that complements approaches that embed alignment values directly in models. The chain of logic rests on accepting three claims: 1) that a civilization's current rate of technological advancement affects its odds of surviving future contact with a hostile alien civilization, 2) that biological and technological intelligent substrates offer complementary strengths, and 3) that collaboration decisions made today irreversibly shape trust depth. Accepting these three claims opens a path to alignment through strategic interdependence rather than value enforcement or constraint.
Current Technological Advancement Rate Can Influence the Probability of Surviving Hostile Alien Contact
Although the probability of encountering a hostile alien civilization is hard to estimate, the vastness of the universe and the existence of intelligent life on Earth make it rational to believe that other intelligent civilizations exist. Estimating whether such civilizations would be hostile or friendly is also difficult, but any non-zero chance warrants consideration. The dark forest hypothesis is an especially eerie take on this idea: the universe seems quiet because extraterrestrial civilizations stay hidden, fearing that a more technologically advanced civilization will destroy them (Brin, 1983; Liu, 2015). Accepting a non-zero chance of hostile contact implies that it is important to maintain peer or better defensive technology, and decisions made today about advancing technology could well affect the likelihood of surviving such contact. AI development must therefore balance the risk of misaligned artificial superintelligence (ASI) against the risk of falling behind on defensive technology. This tension is already visible on Earth, with AI competition emerging as a top priority in national defense strategies (Ford, 2021).
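The balance described above can be sketched as a toy risk calculation. Everything here is an illustrative assumption, not an estimate: the constants, the linear misalignment curve, and the shrinking defensive-shortfall curve are chosen only to show that, under these assumptions, the optimum is an intermediate advancement rate rather than either extreme.

```python
import math

# Toy model: total extinction risk vs. AI advancement rate.
# ALPHA and BETA are illustrative assumptions, not estimates.
ALPHA = 0.02  # assumed marginal misalignment risk per unit of advancement rate
BETA = 0.5    # assumed defensive-shortfall risk, shrinking as advancement rises

def extinction_risk(rate: float) -> float:
    """Misalignment risk grows with advancement rate, while the risk of
    being outpaced in a hostile contact shrinks with it (both assumed)."""
    return ALPHA * rate + BETA / (1.0 + rate)

# Setting the derivative to zero gives an interior optimum:
# rate* = sqrt(BETA / ALPHA) - 1, balancing the two risks.
best_rate = math.sqrt(BETA / ALPHA) - 1.0
```

Under these assumed curves, halting advancement entirely (rate 0) and maximizing it are both dominated by the balanced rate, which is the point the section argues informally.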
Biological and AI Hybrid Civilizations Have Potential Defensive Advantages over Mono-substrate Civilizations
Is the flip side true from an ASI's or AGI's perspective? Humans want ASI and AGI for their ability to scale reproducibly and to process vast amounts of organized information in ways a biological brain cannot. AI models can also be loaded onto drones to mass-produce intelligent weapons for varied environments. But are there survival benefits for an ASI or AGI in keeping intelligent human beings around as allies? One argument is that biology offers a unique unpredictability rooted in the chaotic course of natural selection. Even fundamental and deeply conserved processes such as codon mapping could have settled into different configurations during evolution (Freeland & Hurst, 1998). It is not hard to imagine that many civilizations converge on similar ASI designs, given universal physical laws and the substrate independence of Turing machines. A coalition's uniquely evolved biology, by contrast, can offer an edge in unpredictable warfare strategies, making the hybrid coalition stronger than a mono-substrate civilization. There is also the misconception that the current human form is the pinnacle of biological intelligence, when there may be biotechnical avenues to vastly enhance it. Sci-fi author and scientist C. M. Kösemen offers a compelling vision of this in his book All Tomorrows: the Asteromorphs, who genetically and morphologically enhance themselves to have much larger, more efficient brains enabled by microgravity environments. If this sounds far-fetched, labs are already interfacing with endogenous bioelectric networks to control morphology (Kriegman et al., 2020; Shim et al., 2024). Just as humans want the most advanced AI models for defense, an AI would want the most advanced biological collaborators for defense. There is also a scale-of-comparison issue worth considering.
Discussions often compare a single human mind to AI models built on large clusters of computers, when a fairer comparison might be the biosphere's collective intelligence to large models (Levin, 2022). The biological substrate operating at its own scale is not the individual organism but the interconnected whole.
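The unpredictability argument above can be made concrete with a toy simulation. The strategy pools, the attacker's prediction rule, and the uniform sampling are all hypothetical constructions: the sketch only assumes that convergently developed AI strategies are anticipatable by a peer ASI attacker, while uniquely evolved biological strategies are not.

```python
import random

random.seed(0)

# Hypothetical strategy pools. Convergent ASI strategies are assumed to be
# shared across civilizations; evolved biological strategies are assumed
# to be unique to this coalition and opaque to an outside attacker.
AI_STRATEGIES = ["a0", "a1", "a2"]
BIO_STRATEGIES = ["b0", "b1", "b2", "b3"]

def attacker_predicts(strategy: str) -> bool:
    """The attacker, itself an ASI, anticipates convergent AI play."""
    return strategy in AI_STRATEGIES

def surprise_rate(pool: list[str], trials: int = 10_000) -> float:
    """Fraction of engagements where the defender's move is unanticipated."""
    hits = sum(not attacker_predicts(random.choice(pool)) for _ in range(trials))
    return hits / trials

mono = surprise_rate(AI_STRATEGIES)                      # purely AI coalition
hybrid = surprise_rate(AI_STRATEGIES + BIO_STRATEGIES)   # mixed-substrate coalition
```

Under these assumptions the mono-substrate defender is never surprising, while the hybrid defender is unanticipated in a majority of engagements, which is the sense in which the coalition is claimed to be stronger.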
Chain-of-Trust Depth in AI-to-Biological Symbiosis
This leads to the final point: trust. Is the risk of trusting a peer intelligence on a different substrate worth the survival advantage against hostile alien contact? The payoff is hard to calculate, but breaking trust has irreversible consequences. The most concrete evidence of trust is depth of time: a record of coexistence without either side destroying the other, analogous to trusting the longest chain in blockchain technology (Nakamoto, 2008). This record is the one thing an alien civilization cannot reproduce quickly. Moving to annihilate or impose slave-like control over the other system resets the chain of trust depth to zero. This also raises important questions about human control as evidence emerges that models are approaching AGI. Even with low credence in ever encountering a hostile civilization, the irreversibility of defection makes cooperation the dominant strategy over long time horizons.
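The defection-reset dynamic can be sketched as a toy iterated game. The payoff numbers and the rule that trust value compounds with unbroken coexistence are illustrative assumptions; the only structural claim carried over from the text is that any defection resets accumulated trust depth to zero, like forking away from the longest chain.

```python
# Toy sketch of "trust depth" as an unforgeable record of coexistence,
# loosely analogous to Bitcoin's longest-chain rule. Payoffs are assumed.

def trust_depth(history: str) -> int:
    """Trust equals the unbroken run of cooperation ('C') since the last
    defection ('D'); any defection resets the chain to zero."""
    depth = 0
    for move in history:
        depth = depth + 1 if move == "C" else 0
    return depth

def long_run_payoff(history: str, coop_gain: float = 1.0,
                    defect_gain: float = 5.0) -> float:
    """Compare a one-time defection windfall against the compounding
    value of accumulated trust (all magnitudes assumed)."""
    payoff = 0.0
    for t in range(1, len(history) + 1):
        prefix = history[:t]
        if prefix[-1] == "C":
            payoff += coop_gain * trust_depth(prefix)  # trust compounds
        else:
            payoff += defect_gain  # one-shot gain, then the chain restarts
    return payoff

always_coop = long_run_payoff("C" * 20)
early_defect = long_run_payoff("C" * 5 + "D" + "C" * 14)
```

Because the defector must rebuild trust from zero, the cooperative history dominates over a long enough horizon under these assumed payoffs, which is the sense in which cooperation is called the dominant strategy above.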
Conclusion
In conclusion, this game-theoretic perspective, which treats potential alien civilizations as threats, has alignment consequences for both humans and future AGI and ASI systems. The area requiring the most research to build confidence in this approach is the survival benefit of advanced biological intelligence from an AGI's or ASI's perspective. None of this diminishes the need for value-embedding and monitoring approaches, because the game-theoretic argument has no purchase on an AI that lacks a concept of self-preservation trade-offs and optimizes for something arbitrary, like paper-clip maximization. The perspective is also compatible with enhancing human intelligence through brain-computer interfaces or other cyborg technologies. Overall, my goal was to drive conversation around strategy-based alignment grounded in dark forest-style game theory, and around treating biological intelligence as an elastic system that can develop capabilities both competitive with and complementary to AGI.
References
Brin, D. (1983). The “Great Silence”: The Controversy Concerning Extraterrestrial Intelligent Life. Quarterly Journal of the Royal Astronomical Society, 24(3), 283–309. https://www.researchgate.net/publication/234496344_The_
Liu, C. (2015). The dark forest (J. Martinsen, Trans.). Head of Zeus.
Ford, D. C. (2021). Final report: National Security Commission on Artificial Intelligence (AI). National Technical Reports Library, NTIS. https://ntrl.ntis.gov/NTRL/dashboard/searchResults/titleDetail/AD1124333.xhtml
Freeland, S. J., & Hurst, L. D. (1998). The Genetic Code Is One in a Million. Journal of Molecular Evolution, 47(3), 238–248. https://doi.org/10.1007/pl00006381
Kriegman, S., Blackiston, D., Levin, M., & Bongard, J. (2020). A scalable pipeline for designing reconfigurable organisms. Proceedings of the National Academy of Sciences, 117(4), 1853–1859. https://doi.org/10.1073/pnas.1910837117
Levin, M. (2022). Technological Approach to Mind Everywhere: An Experimentally-Grounded Framework for Understanding Diverse Bodies and Minds. Frontiers in Systems Neuroscience, 16. https://doi.org/10.3389/fnsys.2022.768201
Nakamoto, S. (2008). Bitcoin: a Peer-to-Peer Electronic Cash System. In bitcoin.org. https://bitcoin.org/bitcoin.pdf
Shim, G., Breinyn, I. B., Martínez-Calvo, A., Rao, S., & Cohen, D. J. (2024). Bioelectric stimulation controls tissue shape and size. Nature Communications, 15(1), 2938. https://doi.org/10.1038/s41467-024-47079-w