LESSWRONG
Alignment & Agency
131 · An Orthodox Case Against Utility Functions · abramdemski · 3y · 53 comments
111 · The Pointers Problem: Human Values Are A Function Of Humans' Latent Variables · johnswentworth · 2y · 44 comments
158 · Alignment By Default · johnswentworth · 3y · 94 comments
202 · An overview of 11 proposals for building safe advanced AI · evhub · 3y · 36 comments
227 · The ground of optimization · Alex Flint · 3y · 74 comments
97 · Search versus design · Alex Flint · 3y · 41 comments
178 · Inner Alignment: Explain like I'm 12 Edition · Rafael Harth · 3y · 46 comments
84 · Inaccessible information · paulfchristiano · 3y · 17 comments
114 · AGI safety from first principles: Introduction · Richard_Ngo · 2y · 18 comments
277 · Is Success the Enemy of Freedom? (Full) · alkjash · 2y · 68 comments