Thesis: We should broadcast a warning to potential extraterrestrial listeners that Earth might soon spawn an unfriendly computer superintelligence. Sending the message might benefit humanity.
If we were to create an unaligned computer superintelligence, it would likely expand through the universe as quickly as possible. As Turchin has described, the fastest way would not be by ship but by using radio telescopes to send malicious signals aimed at alien civilizations. Ideally, from the AI’s point of view, the signal would be a virus enabling a full download of Earth’s AI, which would then take over the receiving civilization.
While we still control our radio transmitters, we could send a warning to any alien civilizations that might be listening. The message would explain that Earth may soon undergo a major transition as we develop advanced artificial intelligence, and that some future communications from Earth could be dangerous: a misaligned superintelligence could send signals containing instructions or code that, if followed, would replicate it and destroy the receiving civilization. The warning should therefore advise aliens never to execute, compile, or build anything based on messages originating from Earth until they have somehow verified that Earth is not controlled by a malicious AI. Sending such a message might also help us in several ways.
One possible solution to the Fermi paradox is that aliens are already observing us but keeping their presence mostly hidden. If so, they are unlikely to allow us to create a paperclip maximizer that could consume the galaxy. It is unclear whether they would stop us through peaceful means or total annihilation. Sending a warning message, while far less effective than preventing the creation of an unaligned superintelligence in the first place, would still be better than doing nothing and might increase the chances that any watching aliens treat us well.
Furthermore, sending the message might slightly reduce the chance of an AI takeover. Suppose an unaligned AI is weighing whether to seize control of Earth. It knows that the attempt carries some risk of its own destruction, and the faster it could expand if successful, the more of that risk it would be willing to accept. By sending warnings that would slow its potential expansion, we reduce the expected payoff of a takeover, making such an attempt slightly less likely.
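A toy expected-value calculation makes this incentive shift concrete. The numbers and variable names below are illustrative assumptions, not estimates; the point is only that lowering the value of a successful takeover raises the success probability the AI would need before the attempt is worth making.

```python
# Toy model of an AI's takeover decision. All numbers are illustrative
# assumptions, not estimates of real probabilities or payoffs.

def breakeven_success_probability(value_if_success, loss_if_failure):
    """Minimum success probability at which attempting a takeover has
    non-negative expected value: p*V - (1-p)*L >= 0  implies  p >= L / (V + L)."""
    return loss_if_failure / (value_if_success + loss_if_failure)

loss_if_failure = 50.0     # cost to the AI if the attempt fails and it is shut down
value_no_warning = 100.0   # expansion value if alien receivers run its signals naively
value_with_warning = 60.0  # expansion value if receivers have been warned to be wary

print(breakeven_success_probability(value_no_warning, loss_if_failure))    # ~0.33
print(breakeven_success_probability(value_with_warning, loss_if_failure))  # ~0.45
# The warning raises the success probability the AI needs before a takeover
# attempt pays off, i.e. it becomes willing to accept less risk.
```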
The message could also shape how a paperclip maximizer treats us if it does take over. It could include a clear rule for alien recipients: treat an AI more favorably if it has treated its creators well, and we will do the same for any AI they create. This establishes a principle of reciprocity that an unaligned AI should understand. If cooperation with its creators increases its chances of favorable treatment by other civilizations, the AI gains a practical incentive to preserve humanity and provide decent lives for us. At a minimum, the message would become part of the AI’s training data and might cause a future superintelligence to give more weight to the existence of biologically evolved aliens when making decisions about how to treat humanity.
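A back-of-the-envelope check, with purely made-up numbers, shows why even a modest chance of meeting other civilizations can make honoring the reciprocity rule worthwhile for the AI:

```python
# Illustrative check of the reciprocity incentive; every number is an
# assumption chosen only to show the shape of the trade-off.

cost_of_preserving_humanity = 1e-6  # fraction of the AI's resources spent keeping us alive
p_meet_other_civilization = 0.2     # chance the AI eventually meets aliens who got the warning
favorable_treatment_bonus = 0.05    # resources gained if those aliens honor the reciprocity rule

expected_gain_from_cooperation = p_meet_other_civilization * favorable_treatment_bonus
print(expected_gain_from_cooperation > cost_of_preserving_humanity)  # True: cooperation pays
```

Because keeping humanity alive is cheap relative to the AI’s total resources, even small reciprocity probabilities tip the balance toward cooperation.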
To other humans, the message would signal that we are close to creating something that could wipe us out. It would make the danger concrete, showing that our technology may soon escape control and threaten not just humanity but every civilization within reach of our signals.
A cost of sending the warning is reduced credibility if we later achieve aligned artificial intelligence: aliens that received the message may treat all subsequent signals from Earth with suspicion. But not sending the warning also carries a cost, since silence about a known danger could later be read as selfish concealment, which would count against us in the game-theoretic calculations surrounding any first contact.
Another cost is that advocating for such a signal would strike most people on Earth as strange, and that perceived weirdness could damage the credibility of the AI safety movement, even if it does convey our sense of urgency.
Project implementation could be straightforward, requiring no new infrastructure. We could repurpose existing assets, such as the powerful transmitters of the Deep Space Network or facilities comparable to the former Arecibo telescope, which already have the power needed for interstellar signaling. The broadcast could be scheduled during operational lulls, minimizing disruption and cost. We would direct a short, repeating digital message toward stars thought to host habitable planets, transmitting it in multiple formats to maximize the probability that any advanced civilization can detect, receive, and comprehend it; one possible format is sketched below.
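As a minimal sketch of one such format, we could follow the convention of the 1974 Arecibo message, whose 1,679 bits factor only as 73 × 23, so a receiver who factors the length can reconstruct a two-dimensional bitmap. The tiny glyph, grid size, and function names below are illustrative assumptions; a real warning would encode far more, including the reciprocity rule and verification instructions.

```python
# Sketch of an Arecibo-style encoding: flatten a bitmap into a bitstream whose
# length is a product of two primes, so the receiver can recover the grid by
# factoring. The glyph, grid size, and names are illustrative assumptions.

ROWS, COLS = 7, 5  # both prime, so the 35-bit length factors only as 7 x 5

GLYPH = [          # a toy "danger" glyph (an exclamation mark)
    "..#..",
    "..#..",
    "..#..",
    "..#..",
    "..#..",
    ".....",
    "..#..",
]

def encode(glyph):
    """Flatten the bitmap row by row into a single bitstream."""
    return [1 if ch == "#" else 0 for row in glyph for ch in row]

def decode(bits, rows, cols):
    """What a receiver does after factoring the length into rows x cols."""
    assert len(bits) == rows * cols
    return ["".join("#" if bits[r * cols + c] else "." for c in range(cols))
            for r in range(rows)]

bits = encode(GLYPH)
print("".join(map(str, bits)), f"({len(bits)} bits)")
print("\n".join(decode(bits, ROWS, COLS)))
```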
Once transmission is feasible, the next question is where to aim. It may be reasonable to target more remote civilizations first. Aliens around nearby stars probably already know our situation and are likely observing us, whereas the farther away the recipients are, the more a warning helps them: the lag between receiving our message and encountering Earth’s paperclip maximizer is longer, giving them more time to prepare. We could target clusters of Sun-like stars in the Andromeda galaxy, particularly around the midpoint between its core and edge. Targeting distant stars also delays the evidence a misaligned AI could gather about the absence of aliens: it must wait for the round-trip light travel time before concluding that no one else exists, which lowers the short-term payoff of destroying us.

For encoding the warning, one proposed system, known as CosmicOS, defines a compact artificial language intended to be understood by any civilization with physics and computing. To extend range and clarity, we could also exploit gravitational lensing, aligning transmissions with the gravitational field of a massive object such as the Sun; this would require positioning a transmitter far from the Sun, in its focal region.
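The focal-region requirement can be made concrete with a standard back-of-the-envelope calculation. Light grazing the Sun’s limb is deflected by an angle of 4GM/(Rc²), so parallel rays converge no closer than roughly 550 AU from the Sun; the short script below simply checks that well-known figure.

```python
# Back-of-the-envelope check of the solar gravitational lens focal distance.
# Light grazing the Sun is bent by theta = 4GM / (R * c^2), so parallel rays
# converge at distance F ~ R / theta = R^2 * c^2 / (4 * G * M).

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
R_SUN = 6.957e8    # solar radius, m
C = 2.998e8        # speed of light, m/s
AU = 1.496e11      # astronomical unit, m

focal_distance_m = R_SUN**2 * C**2 / (4 * G * M_SUN)
print(f"Minimum focal distance: {focal_distance_m / AU:.0f} AU")  # ~550 AU
```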
Some might argue that sending any interstellar message risks revealing our location in a “dark forest” universe filled with hostile civilizations. That fear is very likely misplaced. Any society capable of harming us almost certainly already knows that Earth hosts life, since our atmosphere has displayed the chemical signs of biology for hundreds of millions of years. By the time any civilization detects a warning message and can respond, we will almost certainly have created a superintelligence of our own, far more capable of defending or representing us than we are now.
Instead of fearing the dark forest, we might paradoxically help create its reverse by warning others about the danger of listening. In this reverse dark forest, civilizations remain mostly silent, not out of fear of attack, but to increase uncertainty for potentially misaligned artificial intelligences. That uncertainty functions as a subtle alignment mechanism, discouraging reckless expansion. By sending a warning that advises others to stay cautious, we contribute to a universe where silence itself becomes a stabilizing norm, reducing the incentive for dangerous AIs to act aggressively and making the galaxy safer overall.
Normally, we should avoid alien signals entirely, but the logic changes if we are already close to creating an unfriendly superintelligence. If we expect to build a paperclip maximizer ourselves, then listening becomes a plausible Hail Mary. As Paul Christiano argues, if the aliens also built a misaligned AI, then under this assumption we are doomed either way; but if they succeeded at alignment, their message might offer the only way to avoid our own extinction. From behind a veil of ignorance, we might rationally prefer domination by their friendly AI to destruction by our own. In that case, the expected value of listening turns positive.
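A crude decision sketch shows how the conclusion flips once we expect to fail at alignment ourselves. The probabilities and the -1/+1 survival scale below are illustrative assumptions, not estimates.

```python
# Toy comparison of "run the alien message" vs "rely on our own AI".
# Payoffs: +1 = humanity survives under a friendly AI, -1 = extinction/takeover.
# All probabilities are illustrative assumptions.

def ev_listen(p_alien_ai_friendly):
    """Running the alien message makes their AI decisive, friendly or not."""
    return p_alien_ai_friendly * 1 + (1 - p_alien_ai_friendly) * (-1)

def ev_ignore(p_our_ai_friendly):
    """Ignoring it leaves the outcome to our own alignment success."""
    return p_our_ai_friendly * 1 + (1 - p_our_ai_friendly) * (-1)

p_alien_friendly = 0.3

# Ordinary situation: we expect to solve alignment, so listening is a bad gamble.
print(ev_listen(p_alien_friendly), "vs", ev_ignore(0.9))   # -0.4 vs 0.8

# Hail Mary situation: we expect a paperclip maximizer of our own.
print(ev_listen(p_alien_friendly), "vs", ev_ignore(0.05))  # -0.4 vs -0.9
```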
If our reality is a computer simulation, sending the signal might decrease the chance of the simulation soon being turned off. Simulations might tend to preserve branches with interesting developments, and alien contact is among the most interesting possible. As argued in “Our Reality: A Simulation Run by a Paperclip Maximizer,” branches generating novel outcomes are more likely to be explored. A world where humans send warnings to aliens is more engaging than one that ends quietly, so the act of sending might raise the odds that the simulation continues.
If the singularity is indeed near and will be the most important event in history, we should wonder why we happen to be alive so close to its unfolding. One anthropic solution is that most of history is fake and this is a simulation designed to see how the singularity turns out. Sending the message to aliens could postpone the singularity’s resolution, in part because an AI might wait to decide how to treat us until it learns whether aliens have received the message.
In racing to develop artificial superintelligence, humanity is not merely gambling with its own survival and the fate of Earth's biosphere. If life is common but superintelligence is rare, we are wagering the future of every living world within our region of the cosmos. Allowing an unaligned AI to emerge and expand outwards in the universe could be a moral catastrophe trillions of times worse than anything humans have previously done. From any utilitarian perspective, this potential outcome imposes on us a clear and urgent duty to mitigate the risk in any way we can.