LESSWRONG
Risks of Astronomical Suffering (S-risks)
• Applied to "AE Studio @ SXSW: We need more AI consciousness research (and further resources)" by Cameron Berg, 1mo ago
• Applied to "Complexity of value but not disvalue implies more focus on s-risk. Moral uncertainty and preference utilitarianism also do." by Adam Zerner, 2mo ago
• Applied to "Old man's story" by RomanS, 4mo ago
• Applied to "Taking Into Account Sentient Non-Humans in AI Ambitious Value Learning: Sentientist Coherent Extrapolated Volition" by Adrià Moret, 5mo ago
• Applied to "Sentience Institute 2023 End of Year Summary" by michael_dello, 5mo ago
• Applied to "Making AIs less likely to be spiteful" by Maxime Riché, 6mo ago
• Applied to "Rosko’s Wager" by Wuksh, 1y ago
• Applied to "(Crosspost) Asking for online calls on AI s-risks discussions" by jackchang110, 1y ago
• Applied to "Briefly how I've updated since ChatGPT" by rime, 1y ago
• Applied to "The Security Mindset, S-Risk and Publishing Prosaic Alignment Research" by lukemarks, 1y ago
• Applied to "How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying?" by JohnGreer, 1y ago
• Applied to "How likely do you think worse-than-extinction type fates to be?" by span1, 1y ago
• Applied to "The Waluigi Effect (mega-post)" by Maxime Riché, 1y ago
• Applied to "AI alignment researchers may have a comparative advantage in reducing s-risks" by Tristan Cook, 1y ago