Hiroshi Yamakawa
Hiroshi Yamakawa is the chairperson of The Whole Brain Architecture Initiative (WBAI), a non-profit organization, a director of the AI Alignment Network (ALIGN), and a principal researcher at the Graduate School of Engineering of the University of Tokyo. He is an AI researcher interested in the brain. His specialties include brain-inspired artificial general intelligence, concept formation, neurocomputing, and opinion aggregation technology. He is a former chief editor of the Japanese Society for Artificial Intelligence. He received an MS in physics and a PhD in engineering from the University of Tokyo in 1989 and 1992, respectively. He joined Fujitsu Laboratories Ltd. in 1992, founded the Dwango AI Laboratory in 2014, and served as its director until March 2019. He co-founded WBAI in 2015 and became its chairperson. He is also a visiting professor at the Graduate School of the University of Electro-Communications, the Director of the Intelligent Systems Division (visiting professor) at the Institute of Informatics, Kinki University, and a chief visiting researcher at the RIKEN Center for Biosystems Dynamics Research.

Comments
How can we promote AI alignment in Japan?
Answer by Hiroshi Yamakawa · Sep 12, 2024 (52 karma)

Building on the discussion above, I have considered the reasons why Japan tends to be passive in discussions of AI X-Risk.

Cultural Factors

  • Culture aiming for coexistence and co-prosperity with AI: Influenced by polytheistic worldviews and AI-friendly anime, there is an optimistic tendency to view AI as a cooperative entity rather than an adversary, leading to an underestimation of risks.
  • Suppression of risk identification due to "sontaku" (anticipatory obedience) culture: The tendency to refrain from dissent by anticipating superiors' or organizations' intentions hinders X-Risk discussions.
  • Preference for contextual approaches over abstract discussions: Favoring discussions grounded in specific situations (e.g., setting regulations) makes it difficult to engage with abstract, strategic-level X-Risk discussions.
  • Agile governance: Emphasizing flexibility in responses often leads to delayed measures against long-term X-Risks.

Cognitive and Psychological Factors

  • Lack of awareness regarding AGI feasibility: Insufficient understanding of AI technology's progress speed and potential impact.
  • Psychological barrier to excessively large risks: The enormous scale of X-Risks makes it challenging to perceive them as realistic problems.

International Factors

  • Language barrier: Access to AI X-Risk discussions is limited as they are primarily conducted in English.
  • Low expectations: Japan's limited presence, both technologically and in risk strategy, leads to low expectations from the international community.
Posts

  • The Necessity of Studying Emergent Machine Ethics Now (3 karma, 1mo ago, 0 comments)
  • The Intelligence Symbiosis Manifesto - Toward a Future of Living with AI (7 karma, 3mo ago, 2 comments)
  • Reframing AI Safety Through the Lens of Identity Maintenance Framework (-7 karma, 5mo ago, 1 comment)
  • Proposing Human Survival Strategy based on the NAIA Vision: Toward the Co-evolution of Diverse Intelligences (-2 karma, 7mo ago, 0 comments)
  • Sustainability of Digital Life Form Societies (19 karma, 1y ago, 1 comment)
  • Proposing the Post-Singularity Symbiotic Researches (6 karma, 1y ago, 1 comment)
  • Investigating Alternative Futures: Human and Superintelligence Interaction Scenarios (1 karma, 2y ago, 0 comments)