Mental subagent implications for AI Safety