Human Nature, ASI alignment and Extinction

by Ismael Tagle Díaz
20th Jul 2025
1 min read

This post was rejected for the following reason(s):

  • Not addressing relevant prior discussion. Your post doesn't address or build upon relevant previous discussion of its topic that much of the LessWrong audience is already familiar with. If you're not sure where to find this discussion, feel free to ask in monthly open threads (general one, one for AI). Another form of this is writing a post arguing against a position, but not being clear about who exactly is being argued against, e.g., not linking to anything prior. Linking to existing posts on LessWrong is a great way to show that you are familiar/responding to prior discussion. If you're curious about a topic, try Search or look at our Concepts page.

I was reflecting on human nature and how our biological programming might drive us to extinction. From my limited knowledge of anthropology and human evolution, it seems most of us are wired to look out for ourselves and our closest relatives, but not for humanity as a species. So when it comes to developing potentially dangerous technologies such as ASI (Artificial Superintelligence), I simply cannot see the people building it slowing down or taking the safety measures required to avoid a misaligned superintelligence. I don't see that happening because of how human nature (supposedly) works. People like Sam Altman, Elon Musk, Donald Trump, and other relevant actors in ASI development are (supposedly) more concerned with their own futures than with humanity's as a whole. That seems pretty stupid, but as I see it, they, as humans, lack the instinct to put humankind's well-being before their own.

So, are we doomed? Even if most humans have that tendency, I also think there are individuals willing to sacrifice power, status, and money to protect humanity. Examples include Geoffrey Hinton, Ilya Sutskever, and others who have stepped away from ASI development to focus on AI safety. I think it all depends on how many of these more humankind-conscious actors step onto the main stage; if enough do, we may stay on a path that does not end in catastrophe.