I hate to rain on your parade, but equal coexistence with artificial superintelligence would require solving the alignment problem, or the control problem. Otherwise it can and eventually will do something we consider catastrophic.
I think there are a few assumptions here that don't quite match what's being proposed. As I understood it, the manifesto implies that we should spend more resources to solve alignment/control. It also doesn't imply that the coexistence would necessarily be equal; Yamakawa-sensei acknowledges that there is limited value humanity could provide to an all-mighty superintelligence. However, the manifesto encourages exploring avenues in which humanity could maintain some aspects of symbiosis or flourishing, even in the case that control fails.
One way to think about it could be as finding alternatives to S-risks in worlds where ASI happens and is neither entirely aligned nor entirely misaligned, but somewhere in between. Depending on one's priors, the probability of that happening is not negligible.
In response to the growing risk of uncontrollable advanced AI systems, we are announcing the Japan-initiated Manifesto for Symbiotic Intelligence as a strategic vision that balances a preventative sense of crisis with constructive hope. This manifesto aims to open the path to a sustainable future for humanity through the symbiotic coexistence of diverse forms of intelligence. We urge broad dissemination and immediate endorsement through signatures of support.
AI is advancing at a pace that continues to surprise even leading experts. From an AI safety perspective, the outlook is increasingly concerning. Rapid gains in AI capabilities have exposed the limitations of traditional control methods, and there is a non-negligible risk of catastrophic outcomes for human society.
Yet, hope remains—our collective future hinges on how we act today. We need a strategy that integrates both a preventative sense of crisis and constructive hope. While “guardrails” and “alignment” dominate AI safety discourse in the West, such unilateral control measures may lose effectiveness as AI capabilities become more general and autonomous. Relying solely on these methods could ultimately threaten the very stability of human civilization.
In direct response to this challenge, the newly announced Intelligence Symbiosis Manifesto outlines a long-term vision for a future in which diverse intelligences—including humans and AIs—coexist and pursue well-being on equal terms. Drawing from Japan’s cultural tradition of welcoming non-human entities (such as robots) as companions, this initiative offers a globally distinctive and timely contribution to the discourse. It seeks to accelerate practical exploration and dialogue across research, policy, and industry.
Intelligence Symbiosis Manifesto (Statement for endorsement through signature)
I believe humanity's most promising route to a sustainable future is to let diverse intelligences—including humans and AIs—live in symbiosis and flourish together, protecting each other's well-being and averting catastrophic risks.
(See: https://intelligence-symbiosis.info/ )
※ The signature website is scheduled to go live on Thursday, June 12, 2025.
An explanatory note about the Manifesto will also be released on the same day.
To ensure the success of this symbiotic approach, we intend to promote activities along the following lines:
Humanity now faces the unprecedented threat of uncontrollable AI. Neither blind optimism nor fatalistic pessimism will lead us to a safe future. By presenting the path of symbiosis with diverse forms of intelligence as a realistic and timely strategy, rooted in the spirit of harmony, we hope to offer the world a viable alternative and a renewed sense of hope.
The following early signatories represent a diverse range of fields, including AI safety, science fiction, and ethics.