Agency
• Applied to Introduction to Towards Causal Foundations of Safe AGI by tom4everitt 8d ago
• Applied to Intent-aligned AI systems deplete human agency: the need for agency foundations research in AI safety by catubc 15d ago
• Applied to Think carefully before calling RL policies "agents" by TurnTrout 16d ago
• Applied to Minimum Viable Exterminator by Richard Horvath 19d ago
• Applied to Is "brittle alignment" good enough? by the8thbit 1mo ago
• Applied to The Compleat Cybornaut by ukc10014 1mo ago
• Applied to AGI safety from first principles: Goals and Agency by Mo Putera 1mo ago
• Applied to We are misaligned: the saddening idea that most of humanity doesn't intrinsically care about x-risk, even on a personal level by Christopher King 1mo ago
• Applied to Some Summaries of Agent Foundations Work by mattmacdermott 1mo ago
• Applied to Towards Measures of Optimisation by mattmacdermott 1mo ago
• Applied to Notes on the importance and implementation of safety-first cognitive architectures for AI by Brendon_Wong 1mo ago
• Applied to Naturalist Experimentation by RobinGoins 1mo ago
• Applied to Archetypal Transfer Learning and a Corrigibility-Friendly Optimization Technique by marc/er 1mo ago
• Applied to Does agency necessarily imply self-preservation instinct? by Mislav Jurić 2mo ago
• Applied to Why do we care about agency for alignment? by Ruby 2mo ago
• Applied to We Need To Know About Continual Learning by michael_mjd 2mo ago