1 min read · 19th Mar 2024 · 4 comments
This is a special post for quick takes by Ratios. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.
[-]Ratios · 1mo

S-risks are barely discussed on LW. Is that because:

  • People think they are so improbable that they're not worth mentioning.
  • People are scared to discuss them.
  • People want to avoid creating hyperstitious textual attractors.
  • Other reasons?

Mostly the first reason. The "made of atoms that can be used for something else" piece of the standard AI x-risk argument also applies to suffering conscious beings, so if that argument ends up being true, an AI would be unlikely to keep them around.

[-]Dagon · 1mo
  • There's a wide variance in how "suffering" is perceived, weighted, and (dis)valued, and no known resolution to different intuitions about it.

  • There's no real agreement on what S-risks even are, and whether they're anything but a tiny subset of other X-risks.

  • Many people care less about others' suffering than they do about others' positive-valence experience. This may or may not be related to the fact that suffering is generally low-status and satisfaction/meaning is high-status.