Why Not Try Build Safe AGI?
Copy-pasting from my one-on-ones with AI Safety researchers:
- Why mechanistic interpretability does not and cannot contribute to long-term AGI safety (from messages with a friend) — Remmelt, 3y ago
- List #1: Why stopping the development of AGI is hard but doable — Remmelt, 3y ago
- List #2: Why coordinating to align as humans to not develop AGI is a lot easier than, well... coordinating as humans with AGI coordinating to be aligned with humans — Remmelt, 3y ago
- List #3: Why not to assume on prior that AGI-alignment workarounds are available — Remmelt, 3y ago