Myopia

Edited by Dakara, abramdemski, et al. last updated 30th Dec 2024

Myopia refers to short-sightedness in planning and decision-making: a tendency to prioritize immediate or short-term outcomes while disregarding longer-term consequences.

The most extreme form of myopia occurs when an agent considers only immediate rewards, completely disregarding future consequences. In artificial intelligence contexts, a perfectly myopic agent would optimize solely for the current query or task without attempting to influence future outcomes.

Myopic agents demonstrate several notable properties (made concrete in the sketch after this list):

  • Limited temporal scope in decision-making
  • Focus on immediate reward optimization
  • Reduced instrumental incentives
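
One standard way to make perfect myopia precise is a reinforcement-learning discount factor of zero: with γ = 0, a temporal-difference update ignores the value of the next state entirely, so the learner optimizes immediate reward only. The script below is a minimal sketch, not a canonical implementation; the two-action environment and its reward numbers are invented for illustration. It trains the same tabular Q-learner with γ = 0 and γ = 0.9 and shows that only the myopic agent prefers a small immediate reward over a larger delayed one.

```python
import numpy as np

# Toy environment (invented for illustration):
#   state 0 (start): action 0 gives reward 1 and ends the episode;
#                    action 1 gives reward 0 and moves to state 1.
#   state 1: a forced step yielding reward 10, then the episode ends.
# A far-sighted agent (gamma near 1) should prefer action 1;
# a perfectly myopic agent (gamma = 0) prefers action 0.

N_STATES, N_ACTIONS = 2, 2

def train(gamma, episodes=500, alpha=0.5, eps=0.2, seed=0):
    rng = np.random.default_rng(seed)
    q = np.zeros((N_STATES, N_ACTIONS))
    for _ in range(episodes):
        # epsilon-greedy action choice at the start state
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(q[0]))
        if a == 0:
            # immediate payoff, terminal: the target is just the reward
            q[0, 0] += alpha * (1.0 - q[0, 0])
        else:
            # no immediate payoff; with gamma = 0 the bootstrapped term vanishes
            q[0, 1] += alpha * (0.0 + gamma * np.max(q[1]) - q[0, 1])
            # forced follow-up step with reward 10, then terminal
            q[1, 0] += alpha * (10.0 - q[1, 0])
    return q

for gamma in (0.0, 0.9):
    q = train(gamma)
    choice = "immediate" if np.argmax(q[0]) == 0 else "delayed"
    print(f"gamma={gamma}: Q[start]={np.round(q[0], 2)} -> prefers {choice} reward")
```

Because the γ = 0 learner never backs up value from future states, it also gains nothing from steering the environment toward later payoff, which is the source of the reduced instrumental incentives listed above.
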
Posts tagged Myopia
  • Simulators (janus, 3y; 670 karma, 168 comments)
  • The Parable of Predict-O-Matic (abramdemski, 6y; 364 karma, 43 comments)
  • An overview of 11 proposals for building safe advanced AI (evhub, 5y; 221 karma, 37 comments)
  • How LLMs are and are not myopic (janus, 2y; 138 karma, 16 comments)
  • MONA: Managed Myopia with Approval Feedback (Seb Farquhar, David Lindner, Rohin Shah, 10mo; 81 karma, 30 comments)
  • Seeking Power is Often Convergently Instrumental in MDPs (TurnTrout, Logan Riggs, 6y; 160 karma, 39 comments)
  • You can still fetch the coffee today if you're dead tomorrow (davidad, 3y; 97 karma, 19 comments)
  • Thoughts on “Process-Based Supervision” / MONA (Steven Byrnes, 2y; 79 karma, 4 comments)
  • The Credit Assignment Problem (abramdemski, 6y; 107 karma, 41 comments)
  • MONA: Three Month Later - Updates and Steganography Without Optimization Pressure (David Lindner, Vikrant Varma, 7mo; 31 karma, 0 comments)
  • AI safety via market making (evhub, 5y; 73 karma, 45 comments)
  • Open Problems with Myopia (Mark Xu, evhub, 5y; 67 karma, 16 comments)
  • Interpretability’s Alignment-Solving Potential: Analysis of 7 Scenarios (Evan R. Murphy, 4y; 58 karma, 0 comments)
  • Partial Agency (abramdemski, 6y; 76 karma, 18 comments)
  • LCDT, A Myopic Decision Theory (adamShimi, evhub, 4y; 57 karma, 50 comments)
(Showing 15 of 40 posts tagged Myopia.)