Capabilities and alignment of LLM cognitive architectures