Iterative Matrix Steering: Forcing LLMs to "Rationalize" Hallucinations via Subspace Alignment — LessWrong