Reading Group
• Applied to MATS AI Safety Strategy Curriculum v2 by Ryan Kidd 2mo ago
• Applied to 2024 Summer AI Safety Intro Fellowship and Socials in Boston by KevinWei 6mo ago
• Applied to MATS AI Safety Strategy Curriculum by Ryan Kidd 9mo ago
• Applied to Mechanistic Interpretability Reading group by woog 1y ago
• Applied to Announcing “Key Phenomena in AI Risk” (facilitated reading group) by particlemania 2y ago
• Applied to Announcing: Mechanism Design for AI Safety - Reading Group by Raemon 2y ago
• Applied to Books Worthy of Integration by Roven Skyfal 3y ago
• Applied to Why you should try a live reading session by Nihal M 3y ago
• Applied to The Scout Mindset - read-along by Yoav Ravid 4y ago
• Applied to Request: Sequences book reading group by Multicore 4y ago
• Applied to Recommended Rationalist Resources 4y ago
• Applied to Superintelligence 29: Crunch time by Gyrodiot 4y ago
• Applied to Superintelligence 28: Collaboration by Gyrodiot 4y ago
• Applied to Superintelligence 27: Pathways and enablers by Gyrodiot 4y ago
• Applied to Superintelligence 26: Science and technology strategy by Gyrodiot 4y ago
• Applied to Superintelligence 25: Components list for acquiring values by Gyrodiot 4y ago
• Applied to Superintelligence 24: Morality models and "do what I mean" by Gyrodiot 4y ago
• Applied to Superintelligence 23: Coherent extrapolated volition by Gyrodiot 4y ago
• Applied to Superintelligence 22: Emulation modulation and institutional design by Gyrodiot 4y ago