LESSWRONG

Wikitags

Deconfusion

Edited by abramdemski, last updated 17 Mar 2021

Narrowly, deconfusion is a specific branch of AI alignment research, discussed in MIRI's 2018 research update. More broadly, the term applies to any domain. Quoting from the research update:

By deconfusion, I mean something like “making it so that you can think about a given topic without continuously accidentally spouting nonsense.”

Posts tagged Deconfusion
(karma · title · author · age · comments; Ω = crossposted to the Alignment Forum)

63 · Looking Deeper at Deconfusion [Ω] · adamShimi · 4y · 13 comments
72 · Builder/Breaker for Deconfusion [Ω] · abramdemski · 3y · 9 comments
28 · Traps of Formalization in Deconfusion [Ω] · adamShimi · 4y · 7 comments
53 · On MIRI's new research directions · Rob Bensinger · 7y · 12 comments
17 · 1. A Sense of Fairness: Deconfusing Ethics [Ω] · RogerDearnaley · 2y · 8 comments
7 · Implicit and Explicit Learning · Remmelt · 4d · 2 comments
143 · Exercises in Comprehensive Information Gathering · johnswentworth · 5y · 18 comments
136 · Deconfusing Direct vs Amortised Optimization [Ω] · beren · 3y · 19 comments
91 · Modelling Transformative AI Risks (MTAIR) Project: Introduction [Ω] · Davidmanheim, Aryeh Englander · 4y · 0 comments
75 · My research agenda in agent foundations · Alex_Altair · 2y · 9 comments
69 · Strategy is the Deconfusion of Action · ryan_b · 7y · 4 comments
38 · Applications for Deconfusing Goal-Directedness [Ω] · adamShimi · 4y · 3 comments
35 · Classification of AI alignment research: deconfusion, "good enough" non-superintelligent AI alignment, superintelligent AI alignment · philip_b · 5y · 25 comments
31 · Musings on general systems alignment [Ω] · Alex Flint · 4y · 11 comments
26 · Deceptive Alignment and Homuncularity · Oliver Sourbut, TurnTrout · 6mo · 12 comments
(Showing 15 of 34 tagged posts.)