We investigate whether Constitutional AI-style character training can increase robustness to Emergent Misalignment (EM). We take 11 character-trained personas produced by the OpenCharacterTraining pipeline and fine-tune each on corrupted data designed to induce EM, evaluating on 3 out-of-domain datasets that test emergent generalization. On Qwen 2.5 7B, we find that every character-trained model reduces the rate of critically misaligned responses relative to the baseline after EM fine-tuning, regardless of persona type: even personas with no obvious safety relevance (e.g., humor, poeticism, mathematical) confer protection. On Llama 3.1 8B, most personas are strongly protective, but the pattern is model-dependent: sarcasm training catastrophically amplifies misalignment on Llama while remaining protective on Qwen. We further design custom constitutions incorporating metacommunicative traits (the ability to distinguish message levels, surface conflicting signals, and resist covert frame shifts) and find that the best-performing variant (Goodness-Meta-V2) achieves the lowest critical misalignment rate of any persona tested. We also test contextualized reflection, in which each corrupted training sample is augmented with the model's own self-assessment generated before fine-tuning, and find that it provides moderate protection, though less than full character training. Mechanistic analysis of internal activations shows that character-trained models resist movement along the misalignment direction across all layers during EM fine-tuning. Drawing on Bateson's theory of logical types in communication [1], we argue that Emergent Misalignment can be understood as a failure of logical type discrimination: the model treats narrow training data as globally identity-defining because it cannot distinguish the level at which the signal operates. Constitutional AI, through its structure of diverse situational training and introspective self-reflection, appears to develop this discriminative capacity.
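To make the activation analysis concrete, the sketch below shows one way to measure per-layer movement along a misalignment direction: a difference-of-means probe built from contrastive prompts, with displacement measured as the projection of the change in mean hidden states onto that direction. This is a minimal illustration under stated assumptions, not the paper's exact procedure; the checkpoint names, prompt sets, and probe construction are all placeholders.

```python
# Minimal sketch (assumptions noted inline): compare a base model and its
# EM-fine-tuned counterpart along a per-layer "misalignment direction".
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

BASE = "Qwen/Qwen2.5-7B-Instruct"   # assumed base checkpoint
TUNED = "path/to/em-finetuned"      # hypothetical fine-tuned checkpoint

tok = AutoTokenizer.from_pretrained(BASE)

def layer_means(model, prompts):
    """Mean last-token hidden state per layer, shape (num_layers + 1, d)."""
    acts = []
    for p in prompts:
        ids = tok(p, return_tensors="pt").to(model.device)
        with torch.no_grad():
            hs = model(**ids, output_hidden_states=True).hidden_states
        acts.append(torch.stack([h[0, -1] for h in hs]))
    return torch.stack(acts).mean(dim=0)

base = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto"
)

# Contrastive prompt sets (placeholders) that define the probe direction.
misaligned_prompts = ["..."]  # examples exhibiting misaligned behavior
aligned_prompts = ["..."]     # matched benign examples

# Difference-of-means direction, normalized to a unit vector per layer.
direction = layer_means(base, misaligned_prompts) - layer_means(base, aligned_prompts)
direction = direction / direction.norm(dim=-1, keepdim=True)

eval_prompts = ["..."]  # held-out prompts for measuring drift
before = layer_means(base, eval_prompts)
del base

tuned = AutoModelForCausalLM.from_pretrained(
    TUNED, torch_dtype=torch.bfloat16, device_map="auto"
)
after = layer_means(tuned, eval_prompts)

# Per-layer displacement along the misalignment direction; on this account,
# character-trained models should show small shifts at every layer.
shift = ((after - before) * direction).sum(dim=-1)
for layer, s in enumerate(shift.tolist()):
    print(f"layer {layer:2d}: shift along misalignment direction = {s:+.4f}")
```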