Calibrated Transparency: Causal Safety for Frontier AI