Bertrand Russell noted how people often describe the same factual behavior using emotionally opposite language depending on perspective — e.g. I am firm, you are obstinate, he is pigheaded. This framing tactic is now called a Russell Conjugation, and once you start noticing them, they’re everywhere — especially in politics and media.
For the past year and a half, I've been training a finetuned ChatGPT model and building a tool that automatically highlights Russell Conjugations in text and suggests emotionally opposite alternatives. It functions as a fact-independent bias reverser, showing where emotional spin might exist and how the opposite side might see an issue, regardless of the factual accuracy of specific claims.
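At its simplest, the core idea can be sketched as a lookup from an emotionally loaded word to its opposite-valence counterparts. This is a minimal, hypothetical illustration (the lexicon entries and function names below are mine, not the actual tool's, which uses a finetuned model rather than a fixed word list):

```python
# Illustrative mini-lexicon of Russell Conjugations: each neutral-to-positive
# framing maps to harsher words describing the same behavior.
CONJUGATIONS = {
    "firm": ("obstinate", "pigheaded"),
    "frugal": ("thrifty", "stingy"),
    "candid": ("blunt", "tactless"),
}

def highlight_conjugations(text):
    """Return (word, opposite-valence alternatives) pairs found in text."""
    # Crude tokenization for the sketch: strip basic punctuation, lowercase.
    words = text.lower().replace(".", " ").replace(",", " ").split()
    return [(w, CONJUGATIONS[w]) for w in words if w in CONJUGATIONS]

print(highlight_conjugations("She is firm and candid."))
# → [('firm', ('obstinate', 'pigheaded')), ('candid', ('blunt', 'tactless'))]
```

A finetuned model generalizes far beyond any fixed list like this, but the input/output shape is the same: loaded phrase in, emotionally reversed phrasings out.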
Seeing how competent these newer models are has been quite helpful, and I have started using ChatGPT o4-mini-high to generate training sets for a new finetuned model. It does quite well, and once I get the hang of this, I should be able to distill the results and improve my model's performance much more quickly. Wish me luck!
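The distillation step above amounts to packaging a stronger model's answers as supervised finetuning data. A hedged sketch, assuming OpenAI's chat-format JSONL for finetuning (the example prompt/answer pair is illustrative; in practice each assistant answer would come from the teacher model and be reviewed before training):

```python
import json

# Hypothetical teacher outputs: (input text, desired conjugation analysis).
teacher_pairs = [
    ("I am firm.", "Loaded word: 'firm'. Opposite framings: 'obstinate', 'pigheaded'."),
]

def to_finetune_jsonl(pairs):
    """Serialize (prompt, answer) pairs into chat-format JSONL lines."""
    lines = []
    for prompt, answer in pairs:
        record = {
            "messages": [
                {"role": "system",
                 "content": "Highlight Russell Conjugations and suggest emotionally opposite alternatives."},
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": answer},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

print(to_finetune_jsonl(teacher_pairs))
```

Each line is one training example; the resulting file can be uploaded for a finetuning job, which is the "distill the results" step in a nutshell.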