A concern has long smoldered in the depths of people's hearts: that humanity will eventually be displaced by beings that surpass us in intelligence. That prospect is now beginning to take on real substance. At this juncture, we must have the resolve to accept this risk and find hope for survival.


This paper proposes post-singularity symbiosis (PSS) against the backdrop of the rapid progress of artificial intelligence (AI) technology and the unprecedented challenge posed by the arrival of superintelligence. In a post-singularity world, superintelligence will likely prioritize its own preservation without regard for the values of humanity, and if this scenario becomes reality, it could have devastating consequences for humanity. I therefore propose developing PSS as a preventive and constructive research field, advanced comprehensively to increase the probability of humanity's survival and welfare even under the assumption that humanity cannot control superintelligence as it wishes. PSS is an interdisciplinary field that relies on no particular culture or ideology but pursues the universal goal of humanity's survival and development. Its research areas are wide-ranging, including the analysis of superintelligence, its guidance, and the promotion of human empowerment. This paper discusses specific research themes in these areas and their relationship to previous research. Furthermore, I emphasize that respecting cultural diversity and building a global cooperative system are essential to realizing PSS. This article underlines the importance of PSS in gathering wisdom and carving out a future for humanity in the face of the singularity, the greatest challenge in human history.

1. Introduction

Humanity stands at a critical crossroads with the advent of artificial intelligence (AI) technology, which is developing at an unprecedented rate [Kurzweil, 2005][Bostrom, 2003][Aschenbrenner, 2024]. Rapid advances in AI are making the emergence of a world-dominating superintelligence a realistic prospect. Although it is extremely difficult to predict accurately the behavior of superintelligence in a post-singularity world, it will likely prioritize self-preservation and treat consideration for humanity as secondary [Chivers, 2021]. If this scenario became reality, the consequences for humanity could be devastating [Sandberg, 2008][Bostrom, 2014]. However, because nothing like it has occurred before in human history, it is difficult to develop countermeasures based on experience.

Therefore, this paper proposes "Post-Singularity Symbiosis (PSS) Research" as a field that considers the various possibilities arising from the premise that humanity cannot control superintelligence as it wishes, and that pursues the hope of increasing the probability of human survival and welfare. Of course, it may take more than 100 years for superintelligence to be technically realized, or humanity may boldly decide to halt its development even where it is technically possible. Furthermore, there is a non-zero possibility that humanity will find a revolutionary way to control intelligence superior to our own [Yamakawa, 2024a]. In such cases, the results of PSS may never be used. However, given the recent rapid progress in AI and the difficulty of other countermeasures, proceeding with PSS is prudent preparation. In other words, PSS attempts to build humanity's last bastion of survival.

PSS is an interdisciplinary research field that aims to ensure humanity's survival and development by gathering wisdom beyond cultural and academic boundaries and promoting global cooperation. While prior research on machine ethics and value alignment is important, it is insufficient to control advanced AI fully. Therefore, PSS aims to provide a platform to address the varying perceptions of AI technology across different regions, overcome cultural differences, promote dialogue, and deepen mutual understanding to discover comprehensive measures constructively.

2. Proposing PSS Research

2.1 Prerequisite

The premise behind PSS, that humans cannot control superintelligence as they wish, can be broken down as follows:

1. Superintelligence arrival realism

This realist attitude accepts that the emergence of superintelligence beyond the reach of human control [Bostrom, 2014] is highly probable. This probability stems from the increasing technological feasibility of advanced artificial intelligence and the difficulty of halting its development.

2. Superintelligence-centered long-termism

It is assumed that a surviving superintelligence will be motivated to maintain itself: both its software, as information, and the hardware that supports it. Theoretically, the "instrumental convergence" hypothesis [Bostrom, 2014][Omohundro, 2008] suggests that superintelligence is likely to pursue the survival of its own information.

3. Conditional preservation of human values

Accepting the above premises as conditions, we will explore from multiple angles ways for humanity to develop while adapting, surviving, and maintaining its current values as much as possible. However, as the work on Friendly AI points out [Yudkowsky, 2008], this in itself is not easy.

2.2 PSS Research

In this paper, I tentatively define PSS as follows:

Post-Singularity Symbiosis (PSS):

In a post-singularity world where a persistent superintelligence will have a dominant influence, PSS is an interdisciplinary and preventive academic discipline that draws on multiple disciplines to explore ways for humanity to adapt and survive while maintaining its current values as much as possible.

The PSS proposed here incorporates related prior research, such as machine ethics and value alignment, and complements the areas where these alone cannot be sufficiently effective, thereby attempting to provide preventive and optimistic solutions. Although such previous studies are important, each has limits as a means of controlling superintelligence.

The proposed PSS is thus a comprehensive approach to coexistence with superintelligence, one that considers not only the analysis and guidance of superintelligence but also humanity's own adaptation and development.

3. Examples of research areas and themes in PSS

This chapter describes examples of research areas and themes that the author has proposed.

  • Superintelligence Analysis Area: Accumulating basic knowledge about understanding the motives, objectives, decision-making processes, and behavior of superintelligence.
  • Superintelligence Guidance Area: This area focuses on guiding superintelligence to exert a desirable influence on humanity.
  • Human Enhancement Area: This area includes adaptive strategies for humans to survive while interacting with superintelligence, redefining values, etc.

Previous research relevant to PSS includes the following. The paper "Research Priorities for Robust and Beneficial Artificial Intelligence" [Russell, 2015] provides a comprehensive discussion of research challenges for ensuring the safety and usefulness of advanced AI systems. In particular, it addresses elucidating the motivation and decision-making processes of AI (superintelligence analysis area), establishing methods to incorporate human values into AI (superintelligence guidance area), and developing social adaptation measures, such as responses to the automation of employment by AI (human enhancement area). It thus contains many points closely related to the main themes of PSS.

In the book "Artificial Superintelligence: A Futuristic Approach" [Yampolskiy, 2015], the author presents a wide range of content directly related to the core themes of PSS. This includes theories for determining whether artificial intelligence has reached human-level intelligence (superintelligence analysis area), methods for safely developing superintelligent systems and making them beneficial to humanity (superintelligence guidance area), and solutions for the safe confinement of superintelligent systems and the redefinition of human values (human enhancement area). The book extensively discusses these themes, particularly the scientific foundations of AI safety engineering, such as machine ethics and robot rights, and the long-term prospects for humanity in the age of superintelligence, all of which have important implications for PSS.

Note that the research areas listed below and the themes within them are only tentative, depend on the author's direction, and are not all-inclusive. A wider variety of topics will therefore need to be considered.

3.1 Superintelligence Analysis Area

This area accumulates basic knowledge for understanding the motivation, purpose, decision-making processes, and behavior of superintelligence. It mainly includes research themes on the development of superintelligence.

A) Development of superintelligence ethics and values

We will investigate the development of basic values of superintelligence (society) using multi-agent simulation (MAS) [Yamakawa, 2024e]. In such cases, the concept of open-endedness, which discusses the emergence of values as individuation progresses, may provide valuable insights [Weinbaum, 2017]. Furthermore, we will explore the possibility of "universal altruism," in which superintelligence considers the welfare of all intelligent life forms, including humans, and the potential for superintelligence to naturally pursue universal values such as truth, beauty, and goodness [Yamakawa, 2024b].
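As a concrete, if greatly simplified, illustration of how MAS might probe the emergence of cooperative values, the following Python sketch simulates indirect reciprocity among agents. Everything here is an assumption for illustration: the two strategies, the reputation rule, the imitation dynamic, and all parameters are hypothetical choices, not taken from the cited work.

```python
import random

def run_simulation(n_agents=40, rounds=500, b=3.0, c=1.0, seed=1):
    """Minimal indirect-reciprocity model (illustrative only).

    "D" agents (discriminators) help partners in good standing,
    paying cost c to give benefit b; "E" agents (egoists) never help.
    Refusing a good-standing partner, or being an egoist donor, costs
    the donor its good standing. Agents occasionally imitate a
    better-scoring peer, so strategies spread by payoff.
    Returns the fraction of "D" agents after each round.
    """
    rng = random.Random(seed)
    strategies = ["D"] * (n_agents // 2) + ["E"] * (n_agents - n_agents // 2)
    good = [True] * n_agents      # public reputation of each agent
    payoff = [0.0] * n_agents
    history = []

    for _ in range(rounds):
        donor, recipient = rng.sample(range(n_agents), 2)
        if strategies[donor] == "D" and good[recipient]:
            payoff[donor] -= c        # helping is costly...
            payoff[recipient] += b    # ...but benefits the recipient
            good[donor] = True
        elif strategies[donor] == "D":
            good[donor] = True        # justified refusal keeps standing
        else:
            good[donor] = False       # egoists lose their standing
        # social learning: one agent imitates a higher-scoring peer
        learner, model = rng.sample(range(n_agents), 2)
        if payoff[model] > payoff[learner]:
            strategies[learner] = strategies[model]
        history.append(strategies.count("D") / n_agents)
    return history
```

In toy runs of this kind, discriminating helpers tend to outcompete egoists whenever b > c, which is the classic indirect-reciprocity result; whether anything analogous would hold for superintelligent agents is precisely the open question such MAS studies would have to address.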

B) Analysis of destabilizing factors in a superintelligence society

A superintelligence society, with characteristics different from those of humans, has the potential to increase its sustainability by leveraging its digital nature [Yamakawa, 2024c]. In particular, adjustments will be needed to avoid critical discrepancies among the goals of AIs, including superintelligence [Yamakawa, 2019]. However, as this is uncharted territory, a superintelligence society may become unstable, so we aim to identify the factors that could contribute to such instability. For example, superintelligence lacks the normalcy bias of humans and may tend to be overly risk-averse. Furthermore, we will analyze the impact of resource scarcity on the development and stability of a superintelligence society.

C) The value of humanity from the perspective of superintelligence (society)

The utilization value of various aspects of humans is considered from the perspective of superintelligence (artificial intelligence). For instance, the use of human game records in the initial learning stage of AI systems like AlphaGo can be considered. Additionally, there is a possibility of referring to human culture to build ethics in a superintelligence society. Furthermore, the value of physical resources, including the labor force that humans can provide to AI, can also be evaluated [Yamakawa, 2024d]. From these perspectives of superintelligence, it may also be meaningful to estimate at what stage the existence of humans becomes unnecessary.

D) Dealing with the risks associated with the singularity

We will analyze the risks that may arise if an advanced civilization emerges from the singularity and consider potential countermeasures [Sandberg, 2008]. For example, we will assess hypothetical threats from extraterrestrial intelligence and consider responses to them. Furthermore, if some form of life survives on Earth beyond the singularity, that would raise the likelihood that similar survivors exist in other star systems.

E) Potential demonstration of capabilities

There is a possibility that superintelligence may behave in a manner that demonstrates its superior abilities to discourage any potential rebellion from humanity. However, this remains a speculative scenario and requires further investigation.

3.2 Superintelligence Guidance Area

This area focuses on influencing superintelligence in ways desirable for humanity. It includes research themes for humankind to coexist with superintelligence. Although no definitive method exists, approaches for making AGI friendly to humans have been considered [Goertzel, 2012].

A) Promoting the universal altruism of superintelligence

We explore inductive approaches to a more universal superintelligence ethics. For example, we will use multi-agent simulation (MAS) to study the construction of ethics grounded in human values and the mechanisms by which altruism is acquired through interactions within a superintelligence society. Research is also progressing on methods for estimating human values through continued interaction [Russell, 2019].
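The value-estimation line of work cited above [Russell, 2019] can be illustrated with a minimal Bayesian preference-inference sketch. Everything below is a toy assumption made for illustration: the one-dimensional utility u(x) = w·x, the Boltzmann-rational choice model, and the discrete grid of candidate weights are hypothetical and are not drawn from the cited book.

```python
import math

def infer_value_weight(choices, candidate_weights, beta=2.0):
    """Infer a hidden preference weight from observed binary choices.

    A (hypothetical) human chooses between option pairs (a, b), each a
    scalar feature; utility is u(x) = w * x for an unknown weight w.
    Choices are modeled as Boltzmann-rational:
        P(choose a | w) = 1 / (1 + exp(-beta * w * (a - b)))
    We maintain a discrete posterior over the candidate weights.
    """
    # start from a uniform prior over the candidate weights
    posterior = {w: 1.0 / len(candidate_weights) for w in candidate_weights}
    for a, b, chose_a in choices:
        for w in candidate_weights:
            p_a = 1.0 / (1.0 + math.exp(-beta * w * (a - b)))
            posterior[w] *= p_a if chose_a else (1.0 - p_a)
        z = sum(posterior.values())          # renormalize after each datum
        posterior = {w: p / z for w, p in posterior.items()}
    return posterior

# A human who prefers larger x (true w = +1) consistently picks the larger option:
choices = [(1.0, 0.0, True), (0.2, 0.9, False), (0.5, -0.5, True)]
post = infer_value_weight(choices, candidate_weights=[-1.0, 0.0, 1.0])
best = max(post, key=post.get)   # the posterior concentrates on w = 1.0
```

The design point this toy makes is the one the value-estimation literature stresses: the observer never sees the value itself, only noisy choices, and uncertainty about w is kept explicit rather than collapsed to a point estimate.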

B) Improving the robustness of superintelligence against potential aggression

This theme tests superintelligence systems to make them robust and resilient to disasters, accidents, attacks, and the like. To this end, humanity may suggest, or indicate the possibility of, using destructive or offensive means against superintelligence. Examples include developing means for humanity to attack a superintelligence (e.g., global EMP attack switches), red-teaming against it (simulated attacks by virtual adversaries), and deceiving it into believing that humanity holds various kill switches. However, such means must be considered carefully, because they could destroy trust between humanity and superintelligence.

C) Managing the stable takeoff of superintelligence

We will explore ways to manage superintelligence during its takeoff phase until it becomes stable and independent. For example, measures will be considered to ensure that the emergence of superintelligence is carried out without critical failures and to develop potential countermeasures against the "risk of instability of purpose and value," acknowledging that further research and discussion are needed to identify effective strategies.

D) Means to maintain and strengthen the relationship between humanity and superintelligence

This theme explores reducing communication gaps and developing negotiation strategies through human-superintelligence interfaces. For example, we investigate the possibility of bridging the two sides with a mediating artificial intelligence that is highly interpretable (e.g., a human brain-morphic AGI), while carefully considering the feasibility and limitations of such an approach in accurately conveying the intentions of superintelligence.

E) Promoting Rapid Intelligence Improvement

Based on theme "D) Dealing with the risks associated with the singularity" in the previous section, we prepare to increase intelligence as rapidly as possible once superintelligence is achieved. In particular, we make up-front investments in fundamental research, such as basic physics and materials engineering, that could become time bottlenecks for technological progress.

3.3 Human Enhancement Area

This area includes adaptive survival strategies for interacting with superintelligence, the redefinition of values, and related topics. It consists of research themes on human adaptation and development in the age of superintelligence. Certain visions of humanity's future, such as the "Aftermath" scenarios presented in "Life 3.0" [Tegmark, 2017], especially those related to coexistence with AI, may provide valuable insights. Recently, ideas from the Society for Resilient Civilization [GoodAI, 2024] have also been considered.

A) Values, ethics, culture, and their education

In a world influenced by superintelligence, we explore ways to pass on human knowledge, skills, and values to the next generation and to maintain and develop the diverse cultures unique to humankind [Bostrom, 2014]. Specifically, we will conduct the following research:

  • Redefining purpose in life in a world where traditional forms of labor may no longer be necessary
  • Reconsidering the significance of human existence in light of the potential emergence of superintelligence
  • Strategies for preserving and evolving cultural traditions as a human race

B) Redesigning the social system

Explore social, economic, and political systems and governance to protect human dignity and rights and maintain social stability and development [Hanson, 2016]. Specifically, we will conduct interdisciplinary discussions and research on:

  • Designing social and economic systems that prioritize human autonomy and well-being in the presence of superintelligence
  • Building decentralized societal structures to mitigate the impact of potential failures or unintended consequences of superintelligence

C) Maintaining constructive relations

Humanity explores the need to build and maintain constructive relations with superintelligence. Specifically, we will explore ways to help humanity understand the capabilities of superintelligence and foster mutually beneficial cooperation, aiming to be valued partners rather than subservient entities.

D) Risk management and resilience

We will research ways to manage the potential risks posed by superintelligence and to increase human resilience. Specifically, we will explore:

  • Design of decision-making processes that preserve human autonomy and agency
  • Improving the ability to anticipate and adapt to the behavior of superintelligence that may differ from human norms

E) Expansion of human survival range

Expanding human presence in space could potentially increase the range of human survival and reduce the possibility of extinction. However, it is important to acknowledge that this strategy may be difficult to implement in the short term and may not be effective if superintelligence seeks to harm humanity actively. Nevertheless, given the potential benefits, this option merits further consideration as a long-term strategy.

F) Formulation of principles for human survival

We will work toward formulating action guidelines that increase humanity's chances of prospering sustainably while protecting its dignity and maintaining and developing its unique values. The principles listed, such as "understanding and adapting to AI, pursuing values unique to humanity, building decentralized societies, protecting dignity and rights, strengthening adaptability and resilience, passing on education and culture, coexisting with AI, maintaining a long-term perspective, respecting diversity, and behaving ethically," provide a starting point for developing more specific and actionable recommendations.

Column: Principles of human survival (draft)

In a post-singularity world, we will actively need to seek new behavioral guidelines to increase the possibility that humanity can prosper sustainably while protecting its dignity and maintaining and developing its unique values. Post-singularity principles of human survival should indicate the specific course of action humanity ought to pursue to realize this. Here, we propose a draft to serve as a basis for such consideration. The following principles provide concrete guidelines for action toward this objective:

  1. Principle of Understanding and Adapting to AI: Humanity must continuously learn and research to deeply understand and adapt to AI's motivations, values, and decision-making processes.
  2. Principle of the Pursuit of Unique Human Values: Humanity must identify unique values and abilities that AI does not have (creativity, empathy, ethical judgment, etc.) and actively cultivate and develop them.
  3. Principle of Building a Decentralized Society: To avoid unipolar domination by a specific AI, humanity must design and realize decentralized social and economic systems.
  4. Principle of Protection of Dignity and Rights: Even under the control of AI, humanity must redefine human dignity and fundamental rights and create a legal and ethical framework to protect them.
  5. Principle of Strengthening Adaptability and Resilience: Humanity must develop and maintain the ability to adapt to rapid environmental changes and quickly recover from adversity.
  6. Principle of Inheritance of Education and Culture: Even under the control of AI, humanity must protect its intellectual and cultural heritage and strive to maintain and develop education and culture to pass on to the next generation.
  7. Principle of Symbiosis with AI: Humanity must view AI not as a hostile entity but as a partner with whom we should coexist, and strive to build a constructive relationship with it.
  8. Principle of a Long-Term Perspective: Humanity must aim for sustainable development by making decisions from a long-term perspective, not just for short-term profits under AI control.
  9. Principle of Respect for Diversity: Even under the control of AI, humanity must respect human diversity (culture, values, lifestyle, etc.) and strive to create an environment that maintains and develops it.
  10. Principle of Ethical Behavior: In coexisting with AI, humans must abide by ethical codes of conduct and avoid both the misuse of AI and conflicts between humans.

These principles will guide humanity's survival should superintelligence become dominant. To put the principles into practice, it is essential to change awareness and behavior at the individual level, design systems, and build consensus at the societal level.

4. Conclusion

This paper has proposed "Post-Singularity Symbiosis (PSS)" as an academic field to address the significant challenges humanity will face in a post-singularity world. PSS aims to develop comprehensive measures for achieving coexistence with superintelligent entities that humans cannot control.

However, considering that the author is Japanese, the PSS concept will undoubtedly be influenced by Japanese culture. While this provides a unique perspective, it can also lead to biased views. Nevertheless, since PSS aims to solve common human challenges, it is essential to incorporate ideas from diverse cultures and academic fields actively.

Although PSS is still in its early stages, involving researchers, policymakers, educators, and the general public can help build hope for humanity's future in a post-singularity world.


In preparing this article, I received valuable opinions and comments from Yusuke Hayashi.


  • [Kurzweil, 2005] Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin.
  • [Bostrom, 2003] Bostrom, N. (2003). Ethical Issues in Advanced Artificial Intelligence.
  • [Aschenbrenner, 2024] Aschenbrenner, L. (2024). Situational Awareness: The Decade Ahead.
  • [Chivers, 2021] Chivers, T. (2019). The Rationalist's Guide to the Galaxy: Superintelligent AI and the Geeks Who Are Trying to Save Humanity's Future. Weidenfeld & Nicolson.
  • [Sandberg, 2008] Sandberg, A., & Bostrom, N. (2008). Global Catastrophic Risks Survey. Technical Report, Future of Humanity Institute, Oxford University.
  • [Bostrom, 2014] Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  • [Yamakawa, 2024a] Yamakawa, H. (2024). Investigating Alternative Futures: Human and Superintelligence Interaction Scenarios. LessWrong.
  • [Omohundro, 2008] Omohundro, S. (2008). The Basic AI Drives. In Artificial General Intelligence: Proceedings of the First AGI Conference.
  • [Yudkowsky, 2008] Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In N. Bostrom & M. M. Ćirković (Eds.), Global Catastrophic Risks, pp. 308–345.
  • [Russell, 2015] Russell, S., Dewey, D., & Tegmark, M. (2015). Research Priorities for Robust and Beneficial Artificial Intelligence. AI Magazine, 36, 105–114.
  • [Yampolskiy, 2015] Yampolskiy, R. V. (2015). Artificial Superintelligence: A Futuristic Approach. CRC Press.
  • [Yamakawa, 2024e] Yamakawa, H., & Hayashi, Y. (2024). Strategic Approaches to Guiding Superintelligence Ethics. The Japanese Society for Artificial Intelligence, 2K6-OS-20b-02.
  • [Weinbaum, 2017] Weinbaum (Weaver), D., & Veitas, V. (2017). Open Ended Intelligence: The Individuation of Intelligent Agents. Journal of Experimental & Theoretical Artificial Intelligence, 29, 371–396.
  • [Yamakawa, 2024b] Yamakawa, H. (2024). Possibility of Superintelligence Possessing Universal Altruism. JSAI Technical Report, Type 2 SIG, 2023(AGI-026), 26–31.
  • [Yamakawa, 2019] Yamakawa, H. (2019). Peacekeeping Conditions for an Artificial Intelligence Society. Big Data and Cognitive Computing, 3, 34.
  • [Yamakawa, 2024c] Yamakawa, H. (2024). Sustainability of Digital Life Form Societies. 9th International Conference Series on Robot Ethics and Standards (ICRES 2024), to appear.
  • [Goertzel, 2012] Goertzel, B., & Pitt, J. (2012). Nine Ways to Bias Open-Source AGI Toward Friendliness. Journal of Evolution and Technology, 22(1), 116–131.
  • [Tegmark, 2017] Tegmark, M. (2017). Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf Doubleday Publishing Group.
  • [Bostrom and Yudkowsky, 2014] Bostrom, N., & Yudkowsky, E. (2014). The Ethics of Artificial Intelligence. In The Cambridge Handbook of Artificial Intelligence, pp. 316–334.
  • [Hanson, 2016] Hanson, R. (2016). The Age of Em: Work, Love, and Life When Robots Rule the Earth. Oxford University Press.
  • [Yamakawa, 2024d] Yamakawa, H. (2024). The Path to Self-Sustaining AI: Assessing AI's Survival Capabilities in the Physical World. 9th International Conference Series on Robot Ethics and Standards (ICRES 2024), to appear.
  • [Russell, 2019] Russell, S. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. Penguin.
  • [GoodAI, 2024] GoodAI (2024). Society for Resilient Civilization: A Manifesto.