Risks of Astronomical Suffering (S-risks)
• Applied to If AI starts to end the world, is suicide a good idea? by IlluminateReality 2mo ago
• Applied to S-Risks: Fates Worse Than Extinction by aggliu 4mo ago
• Applied to AE Studio @ SXSW: We need more AI consciousness research (and further resources) by Cameron Berg 6mo ago
• Applied to Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do. by Adam Zerner 7mo ago
• Applied to Old man's story by RomanS 9mo ago
• Applied to Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition by Adrià Moret 10mo ago
• Applied to Sentience Institute 2023 End of Year Summary by michael_dello 10mo ago
• Applied to Making AIs less likely to be spiteful by Maxime Riché 10mo ago
• Applied to Rosko’s Wager by Wuksh 1y ago
• Applied to (Crosspost) Asking for online calls on AI s-risks discussions by jackchang110 1y ago
• Applied to Briefly how I've updated since ChatGPT by rime 1y ago
• Applied to The Security Mindset, S-Risk and Publishing Prosaic Alignment Research by lukemarks 1y ago
• Applied to How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying? by JohnGreer 1y ago
• Applied to How likely do you think worse-than-extinction type fates to be? by span1 1y ago