Mild Optimization
• Applied to How to safely use an optimizer by Simon Fischer 5mo ago
• Applied to [Aspiration-based designs] 2. Formal framework, basic algorithm by Simon Fischer 5mo ago
• Applied to [Aspiration-based designs] 1. Informal introduction by Simon Fischer 5mo ago
• Applied to AISC project: SatisfIA – AI that satisfies without overdoing it by Jobst Heitzig 10mo ago
• Applied to Aspiration-based Q-Learning by Jobst Heitzig 11mo ago
• Applied to AISC team report: Soft-optimization, Bayes and Goodhart by Simon Fischer 1y ago
• Applied to Requirements for a STEM-capable AGI Value Learner (my Case for Less Doom) by RogerDearnaley 1y ago
• Applied to Why don't quantilizers also cut off the upper end of the distribution? by Alex_Altair 1y ago
• Applied to Thinking about maximization and corrigibility by James Payor 1y ago
• Applied to "Corrigibility at some small length" by dath ilan by Christopher King 1y ago
• Applied to The Optimizer's Curse and How to Beat It by Roger Dearnaley 2y ago
• Applied to Breaking the Optimizer’s Curse, and Consequences for Existential Risks and Value Learning by Roger Dearnaley 2y ago
• Applied to Validator models: A simple approach to detecting goodharting by beren 2y ago
• Applied to Reward is not Necessary: How to Create a Compositional Self-Preserving Agent for Life-Long Learning by Roman Leventov 2y ago
• Applied to Soft optimization makes the value target bigger by plex 2y ago
• Applied to Steam by abramdemski 2y ago