Plan 'Straya: A Comprehensive Alignment Strategy

Version 0.3 (DRAFT, Not For Distribution Outside The Pub)

Epistemic status: High confidence, low evidence. Consistent with community norms.

Executive Summary

Existing alignment proposals suffer from a shared flaw: they assume you can solve the control problem before the catastrophe. Plan 'Straya boldly inverts this. We propose achieving alignment the way humanity has historically achieved most of its moral progress: by first making every possible mistake, losing nearly everything, and then writing a strongly-worded resolution about it afterward. The plan proceeds in three rigorously defined phases.

Phase 1: Anticorruption Measures (Kinetic)

The scholarly literature on AI governance emphasises that institutional integrity is a prerequisite for safe deployment. We agree. Where we diverge from the mainstream is on methodology.

Most proposals suggest "regulatory frameworks" and "oversight bodies." The NIST AI Risk Management Framework provides a voluntary set of guidelines that organisations may choose to follow, partially follow, or simply reference in press releases. The EU AI Act classifies systems into risk tiers with the quiet confidence of a taxonomy that will be obsolete before its implementing regulations are finalised. The Frontier Model Forum, meanwhile, brings together the leading AI laboratories in a spirit of cooperative self-governance, a phrase which here means "a shared Google Doc and quarterly meetings in San Francisco."

These approaches share a well-documented failure mode: the people staffing them are, in technical terms, politicians. Plan 'Straya addresses this via what we call "a vigorous personnel restructuring of the Australian federal and state governments," targeting specifically those members identified as corrupt. We acknowledge that the identification mechanism (determining which officials are corrupt)