AI Alignment Fieldbuilding
• Applied to Are AI developers playing with fire? by the gears to ascension, 6d ago
• Applied to The best AI safety introductions for Japanese speakers 日本語話者にとって最適なAIセーフティの入門資料 by trevor, 7d ago
• Applied to The humanity's biggest mistake by RomanS, 12d ago
• Applied to Aspiring AI safety researchers should ~argmax over AGI timelines by Ryan Kidd, 20d ago
• Applied to Problems of people new to AI safety and my project ideas to mitigate them by Igor Ivanov, 22d ago
• Applied to AGI doesn't need understanding, intention, or consciousness in order to kill us, only intelligence by James Blaha, 1mo ago
• Applied to Qualities that alignment mentors value in junior researchers by Akash, 1mo ago
• Applied to The Importance of AI Alignment, explained in 5 points by Raemon, 1mo ago
• Applied to The best way so far to explain AI risk: The Precipice (p. 137-149) by trevor, 1mo ago
• Applied to Many important technologies start out as science fiction before becoming real by trevor, 1mo ago
• Applied to so you think you're not qualified to do technical alignment research? by Raemon, 2mo ago
• Applied to You are probably not a good alignment researcher, and other blatant lies by junk heap homotopy, 2mo ago
• Applied to Retrospective on the AI Safety Field Building Hub by Vael Gates, 2mo ago
• Applied to “AI Risk Discussions” website: Exploring interviews from 97 AI Researchers by Vael Gates, 2mo ago
• Applied to Predicting researcher interest in AI alignment by Vael Gates, 2mo ago
• Applied to AI Safety Arguments: An Interactive Guide by Lukas Trötzmüller, 2mo ago
• Applied to Interviews with 97 AI Researchers: Quantitative Analysis by Maheen Shermohammed, 2mo ago
• Applied to A Brief Overview of AI Safety/Alignment Orgs, Fields, Researchers, and Resources for ML Researchers by Austin Witte, 2mo ago