LESSWRONG
Risks of Astronomical Suffering (S-risks)
• Applied to Rosko’s Wager by Wuksh, 23d ago
• Applied to (Crosspost) Asking for online calls on AI s-risks discussions by jackchang110, 24d ago
• Applied to Why aren’t more of us working to prevent AI hell? by Dawn Drescher, 1mo ago
• Applied to Briefly how I've updated since ChatGPT by rime, 1mo ago
• Applied to The Security Mindset, S-Risk and Publishing Prosaic Alignment Research by marc/er, 2mo ago
• Applied to What's the opposite of "s-risk"? by cSkeleton, 2mo ago
• Applied to How likely are scenarios where AGI ends up overtly or de facto torturing us? How likely are scenarios where AGI prevents us from committing suicide or dying? by JohnGreer, 2mo ago
• Applied to How likely do you think worse-than-extinction type fates to be? by span1, 2mo ago
• Applied to The Waluigi Effect (mega-post) by Maxime Riché, 3mo ago
• Applied to AI alignment researchers may have a comparative advantage in reducing s-risks by Tristan Cook, 4mo ago
• Applied to Accurate Models of AI Risk Are Hyperexistential Exfohazards by Thane Ruthenis, 5mo ago
• Applied to The case against AI alignment by andrew sauer, 6mo ago
• Applied to Likelihood of hyperexistential catastrophe from a bug? by Noosphere89, 7mo ago
• Applied to New book on s-risks by Ruby, 7mo ago
• Applied to Should you refrain from having children because of the risk posed by artificial intelligence? by RobertM, 9mo ago
• Applied to How likely do you think worse-than-extinction type fates to be? by Ruby, 10mo ago
• Applied to Paradigm-building from first principles: Effective altruism, AGI, and alignment by Cameron Berg, 1y ago