This is a special post for quick takes by Super AGI. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Is this proof that only intelligent life favors self-preservation?

Joseph Jacks' argument here at 50:08 is: 

1) If Humans let Super Intelligences do "whatever they want", they won't try to kill all the Humans (because they're automatically nice?)

2) If Humans make any (even feeble) attempts to protect themselves from Super Intelligences, then the Super Intelligences will have reason to try to kill all the Humans.

3) Humans should definitely build Super Intelligences and let them do whatever they want... what could go wrong? yolo!




P. If humans try to restrict the behavior of a superintelligence, then the superintelligence will have a reason to kill all humans.


Ah yes, the second part of Jacks' argument as I presented it was a bit hyperbolic. (Though I feel the point stands: he seems to suggest that any attempt to restrict Super Intelligences would "create the conditions for an antagonistic relationship" and give them a reason to harm Humans.) I've updated the post with your suggestion. Thanks for the review and clarification.


Point 3) is meant to emphasize that:

  • he knows the risk and danger to Humans in creating Super Intelligences without fully understanding their abilities and goals, and yet
  • he is in favor of building them and giving them free and unfettered access to take any actions in the world that they see fit

This is, of course, an option that Humans could take. But the question remains: would this course of action pose acceptable risks to Humans and Human society? Would it favor Humans' self-preservation?