Clarifying “wisdom”: Foundational topics for aligned AIs to prioritize before irreversible decisions