Liar Paradox Revisited
Agent Foundation Foundations and the Rocket Alignment Problem
[Question] Would solving logical counterfactuals solve anthropics?
[Question] Is there a difference between uncertainty over your utility function and uncertainty over outcomes?
Deconfusing Logical Counterfactuals
[Question] Is Agent Simulates Predictor a "fair" problem?
Debate AI and the Decision to Release an AI
[Question] Which approach is most promising for aligned AGI?
On Abstract Systems