Hiroshi Yamakawa

Hiroshi Yamakawa is the chairperson of the Whole Brain Architecture Initiative (WBAI), a non-profit organization, a director of the AI Alignment Network (ALIGN), and a principal researcher at the Graduate School of Engineering of The University of Tokyo. He is an AI researcher interested in the brain. His specialties include brain-inspired artificial general intelligence, concept formation, neurocomputing, and opinion aggregation technology. He is a former chief editor of the Japanese Society for Artificial Intelligence. He received an MS in physics and a PhD in engineering from the University of Tokyo in 1989 and 1992, respectively. He joined Fujitsu Laboratories Ltd. in 1992, founded the Dwango AI Laboratory in 2014, and directed it until March 2019. He co-founded WBAI in 2015 and has served as its chairperson since then. He is also a visiting professor at the Graduate School of the University of Electro-Communications, the director (visiting professor) of the Intelligent Systems Division at the Institute of Informatics, Kinki University, and a chief visiting researcher at the RIKEN Center for Biosystems Dynamics Research.

Answer by Hiroshi Yamakawa

Drawing on the discussion above, I have compiled the reasons why Japan tends to be passive in discussions of AI X-Risk.

Cultural Factors

  • Culture aiming for coexistence and co-prosperity with AI: Influenced by polytheistic worldviews and AI-friendly anime, there is an optimistic tendency to view AI as a cooperative entity rather than an adversary, which leads to an underestimation of risks.
  • Suppression of risk identification due to "sontaku" (anticipatory obedience) culture: The tendency to refrain from dissent by anticipating the intentions of superiors or organizations hinders X-Risk discussions.
  • Preference for contextual approaches over abstract discussions: Favoring discussion grounded in specific situations (e.g., drafting regulations) makes it difficult to engage with abstract, strategy-level X-Risk discussions.
  • Agile governance: An emphasis on flexible, case-by-case responses often delays measures against long-term X-Risks.

Cognitive and Psychological Factors

  • Lack of awareness regarding AGI feasibility: Insufficient understanding of the speed of AI progress and its potential impact.
  • Psychological barrier to excessively large risks: The enormous scale of X-Risks makes it challenging to perceive them as realistic problems.

International Factors

  • Language barrier: Access to AI X-Risk discussions is limited because they are conducted primarily in English.
  • Low international expectations: Japan's limited presence in both AI technology and AI risk strategy leads the international community to expect little of it.