Deconfusion

Edited by abramdemski last updated 17th Mar 2021

Narrowly, deconfusion is a specific branch of AI alignment research, discussed in MIRI's 2018 research update. More broadly, the term can apply to any domain. Quoting from the research update:

By deconfusion, I mean something like “making it so that you can think about a given topic without continuously accidentally spouting nonsense.”

Posts tagged Deconfusion
63 · Looking Deeper at Deconfusion [Ω] — adamShimi, 4y (13 comments)
72 · Builder/Breaker for Deconfusion [Ω] — abramdemski, 3y (9 comments)
28 · Traps of Formalization in Deconfusion [Ω] — adamShimi, 4y (7 comments)
53 · On MIRI's new research directions — Rob Bensinger, 7y (12 comments)
17 · 1. A Sense of Fairness: Deconfusing Ethics [Ω] — RogerDearnaley, 2y (8 comments)
12 · Deconfusing ‘AI’ and ‘evolution’ — Remmelt, 4mo (11 comments)
144 · Exercises in Comprehensive Information Gathering — johnswentworth, 6y (18 comments)
136 · Deconfusing Direct vs Amortised Optimization [Ω] — beren, 3y (19 comments)
91 · Modelling Transformative AI Risks (MTAIR) Project: Introduction [Ω] — Davidmanheim, Aryeh Englander, 4y (0 comments)
76 · My research agenda in agent foundations — Alex_Altair, 2y (9 comments)
69 · Strategy is the Deconfusion of Action — ryan_b, 7y (4 comments)
38 · Applications for Deconfusing Goal-Directedness [Ω] — adamShimi, 4y (3 comments)
35 · Classification of AI alignment research: deconfusion, "good enough" non-superintelligent AI alignment, superintelligent AI alignment — philip_b, 5y (25 comments)
31 · Musings on general systems alignment [Ω] — Alex Flint, 4y (11 comments)
26 · Deceptive Alignment and Homuncularity — Oliver Sourbut, TurnTrout, 10mo (12 comments)