Outer Alignment is the Necessary Complement to AI 2027's Best Case Scenario — LessWrong