I just heard the most disturbing comment in a Lex Fridman interview with Roman Yampolskiy, regarding the dangers of superintelligent AI. Yampolskiy talked about x-risk, existential risk, where everyone's dead. Then, s-risk, suffering risk, where everyone wishes they were dead. Finally, i-risk, ikigai risk, where humanity has lost its meaning because systems have taken over everything and we have no reason left to exist.
I want to focus on x-risk. Yampolskiy says (I'm paraphrasing) that there are many malevolent actors, people with mental illnesses who lack empathy, who lack the human capacity to understand suffering. (I can think of a few right now who are in the news every day.) How...