Toward Meta-Free Alignment: From Persistence to Unified Generative Viability
Motivation

In AI alignment research, it is often assumed that meta-level governance structures are necessary: external rule stacks, utility aggregators, or meta-ethical overseers that constrain the agent. Yet such meta-constraints may themselves introduce fragility: Who specifies them? How are they audited? And how do we avoid regress into “meta of...
Sep 9, 2025