Myopia

Edited by Dakara, abramdemski, et al. last updated 30th Dec 2024

Myopia refers to short-sightedness in planning and decision-making processes. It describes a tendency to prioritize immediate or short-term outcomes while disregarding longer-term consequences.

The most extreme form of myopia occurs when an agent considers only immediate rewards, completely disregarding future consequences. In artificial intelligence contexts, a perfectly myopic agent would optimize solely for the current query or task without attempting to influence future outcomes.

Myopic agents demonstrate several notable properties:

  • Limited temporal scope in decision-making
  • Focus on immediate reward optimization
  • Reduced instrumental incentives, since influencing outcomes beyond the current task has no value to the agent
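
As a minimal illustrative sketch (not anything specified by this entry), the distinction can be framed in reinforcement-learning terms as a discount factor of zero: a perfectly myopic agent ranks actions by immediate reward alone, while a far-sighted agent also values delayed payoffs. The toy two-state environment, reward numbers, and gamma values below are assumptions chosen purely for illustration.

```python
# Sketch: perfect myopia as a zero discount factor in a toy two-state MDP.
# States: 0 = start, 1 = "invested". Actions: "cash_in" pays now,
# "invest" pays nothing now but sets up a larger payoff next step.
REWARDS = {
    (0, "cash_in"): 1.0,   # small immediate payoff
    (0, "invest"):  0.0,   # defer payoff
    (1, "cash_in"): 10.0,  # larger delayed payoff after investing
    (1, "invest"):  0.0,
}
TRANSITIONS = {
    (0, "cash_in"): 0,
    (0, "invest"):  1,
    (1, "cash_in"): 0,
    (1, "invest"):  1,
}
ACTIONS = ["cash_in", "invest"]


def q_values(gamma: float, horizon: int = 50) -> dict:
    """Finite-horizon value iteration; gamma = 0 recovers the perfectly myopic agent."""
    v = {0: 0.0, 1: 0.0}
    for _ in range(horizon):
        v = {
            s: max(REWARDS[(s, a)] + gamma * v[TRANSITIONS[(s, a)]] for a in ACTIONS)
            for s in v
        }
    return {
        (s, a): REWARDS[(s, a)] + gamma * v[TRANSITIONS[(s, a)]]
        for s in v
        for a in ACTIONS
    }


def best_action(gamma: float, state: int = 0) -> str:
    """Pick the action with the highest Q-value in the given state."""
    q = q_values(gamma)
    return max(ACTIONS, key=lambda a: q[(state, a)])


print("gamma = 0.0 (perfectly myopic):", best_action(0.0))  # cash_in
print("gamma = 0.9 (far-sighted):     ", best_action(0.9))  # invest
```

Running this, the gamma = 0 agent grabs the immediate payoff while the gamma = 0.9 agent defers it, which is the behavioural difference the properties above describe.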
Posts tagged Myopia
  • Partial Agency, by abramdemski (6y; 76 karma, 18 comments)
  • The Credit Assignment Problem, by abramdemski (6y; 107 karma, 40 comments)
  • How LLMs are and are not myopic, by janus (2y; 137 karma, 16 comments)
  • Towards a mechanistic understanding of corrigibility, by evhub (6y; 47 karma, 26 comments)
  • Open Problems with Myopia, by Mark Xu and evhub (5y; 66 karma, 16 comments)
  • Steering Behaviour: Testing for (Non-)Myopia in Language Models, by Evan R. Murphy and Megan Kinniment (3y; 40 karma, 19 comments)
  • LCDT, A Myopic Decision Theory, by adamShimi and evhub (4y; 57 karma, 50 comments)
  • Defining Myopia, by abramdemski (6y; 32 karma, 18 comments)
  • Arguments against myopic training, by Richard_Ngo (5y; 62 karma, 39 comments)
  • You can still fetch the coffee today if you're dead tomorrow, by davidad (3y; 97 karma, 19 comments)
  • The Parable of Predict-O-Matic, by abramdemski (6y; 364 karma, 43 comments)
  • An overview of 11 proposals for building safe advanced AI, by evhub (5y; 220 karma, 37 comments)
  • Seeking Power is Often Convergently Instrumental in MDPs, by TurnTrout and Logan Riggs (6y; 160 karma, 39 comments)
  • MONA: Managed Myopia with Approval Feedback, by Seb Farquhar, David Lindner, and Rohin Shah (8mo; 81 karma, 30 comments)
  • Thoughts on “Process-Based Supervision”, by Steven Byrnes (2y; 74 karma, 4 comments)