Code Generation as an AI risk setting