Meta-theory of rationality

Feb 20, 2025 by Cole Wyeth

Here I speculate about questions such as:

What makes a theory of rationality useful or useless?

When is a theory of rationality useful for building agents, describing agents, or becoming a better agent, and to what extent should the answers be connected?

How elegant should we expect algorithms for intelligence to be?

What concepts deserve to be promoted to the root/core design of an AGI versus discovered by AGI? Perhaps relatedly, does human cognition have such a root/core algorithm, and if so, what is it?

Posts in this sequence:

Levels of analysis for thinking about agency

Action theory is not policy theory is not agent theory

What makes a theory of intelligence useful?

Existing UDTs test the limits of Bayesianism (and consistency)

Glass box learners want to be black box

Modeling versus Implementation

Pitfalls of Building UDT Agents