Ratios's Shortform

by Ratios
19th Mar 2024
4 comments, sorted by top scoring
Ratios · 2y · 12

S-risks are barely discussed on LW. Is that because:

  • People think they are so improbable that they're not worth mentioning.
  • People are scared to discuss them.
  • People want to avoid creating hyperstitious textual attractors.
  • Other reasons?
ChristianKl · 2y · 9

See https://web.archive.org/web/20230505191204/https://www.lesswrong.com/posts/5Jmhdun9crJGAJGyy/why-are-we-so-complacent-about-ai-hell for a longer previous discussion of this.

Nate Showell · 2y · 1

Mostly the first reason. The "made of atoms that can be used for something else" piece of the standard AI x-risk argument also applies to suffering conscious beings, so if that argument ends up being true, an AI would be unlikely to keep them around.

Dagon · 2y · 1
  • There's a wide variance in how "suffering" is perceived, weighted, and (dis)valued, and no known resolution to different intuitions about it.
  • There's no real agreement on what S-risks even are, or whether they're anything but a tiny subset of other X-risks.
  • Many people care less about (others') suffering than they do about positive-valence experience (of others). This may or may not be related to the fact that suffering is generally low-status and satisfaction/meaning is high-status.