Research Agendas
• Applied to What should AI safety be trying to achieve? by EuanMcLean 10d ago
• Applied to Announcing Human-aligned AI Summer School by Jan_Kulveit 11d ago
• Applied to EIS XIII: Reflections on Anthropic’s SAE Research Circa May 2024 by scasper 12d ago
• Applied to The Prop-room and Stage Cognitive Architecture by Robert Kralisch 1mo ago
• Applied to Speedrun ruiner research idea by lukehmiles 2mo ago
• Applied to Constructability: Plainly-coded AGIs may be feasible in the near future by Charbel-Raphaël 2mo ago
• Applied to Sparsify: A mechanistic interpretability research agenda by Marius Hobbhahn 2mo ago
• Applied to Gradient Descent on the Human Brain by Jozdien 2mo ago
• Applied to Towards White Box Deep Learning by Maciej Satkiewicz 2mo ago
• Applied to Natural abstractions are observer-dependent: a conversation with John Wentworth by Martín Soto 4mo ago
• Applied to Gaia Network: An Illustrated Primer by Rafael Kaufmann Nedal 4mo ago
• Applied to Worrisome misunderstanding of the core issues with AI transition by Roman Leventov 4mo ago
• Applied to Four visions of Transformative AI success by Steven Byrnes 5mo ago
• Applied to Research Jan/Feb 2024 by jacobjacob 5mo ago
• Applied to The Plan - 2023 Version by Thane Ruthenis 5mo ago
• Applied to Assessment of AI safety agendas: think about the downside risk by Roman Leventov 5mo ago
• Applied to The 'Neglected Approaches' Approach: AE Studio's Alignment Agenda by Cameron Berg 5mo ago