This is a special post for quick takes by Ratios.
S-risks are barely discussed on LW. Is that because:
See https://web.archive.org/web/20230505191204/https://www.lesswrong.com/posts/5Jmhdun9crJGAJGyy/why-are-we-so-complacent-about-ai-hell for a longer previous discussion of this.
Mostly the first reason. The "made of atoms that can be used for something else" piece of the standard AI x-risk argument also applies to suffering conscious beings, so an AI would be unlikely to keep them around if the standard AI x-risk argument ends up being true.
There's no real agreement on what S-risks even are, or on whether they're anything but a tiny subset of other X-risks.