Verified Relational Alignment: A Framework for Robust AI Safety Through Collaborative Trust