I replicated the Anthropic alignment faking experiment on other models, and they didn't fake alignment