Scaffolded LLMs: Less Obvious Concerns