Constraining Minds, Not Goals: A Structural Approach to AI Alignment