Attractors, Not Guidelines: Six “Why Not” Shifts for Safer AI — LessWrong