I just played Gemini 3, Claude 4.5 Opus and GPT 5.1 at chess.
It was just one game each, but the results seemed pretty clear: Gemini was in a different league from the others. I'm a 2000+ rated player (chess.com rapid), yet it reached a winning position against me multiple times before eventually succumbing on move 25. GPT 5.1 was worse by move 9 and losing by move 12, and Opus was lost by move 13.
Hallucinations followed the same pattern: GPT 5.1 hallucinated first, on move 10, and hallucinated most frequently; Claude first hallucinated on move 13; and Gemini made it to move 20 despite playing a more intricate, complex game (I struggled significantly more against it).
Gemini was also the only AI to observe the proper etiquette of resigning once lost; GPT just kept playing on, down a ton of pieces, and Claude died quickly.
Games:
Gemini: https://lichess.org/5mdKZJKL#50
Claude: https://lichess.org/Ht5qSFRz#55
GPT: https://lichess.org/IViiraCf
I was white in all games.
Dumb idea for dealing with distribution shift in Alignment:
Use your alignment scheme to train the model on a much wider distribution than the deployment distribution; this is one of the techniques used to ensure that quadruped-robot policies generalise properly in this paper.
It seems to me that if you make your training distribution wide enough, this should be sufficient to cover any deployment distribution shift.
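The underlying trick here is essentially domain randomization: sample training environments from parameter ranges chosen to be a strict superset of anything expected at deployment. A minimal sketch of that idea, with entirely hypothetical parameter names and ranges (not taken from the paper):

```python
import random

# Hypothetical environment parameters for a robot-training setup.
# Training ranges are deliberately wider than the expected
# deployment ranges, so deployment is "inside" the training
# distribution rather than a shift away from it.
TRAIN_RANGES = {
    "friction":   (0.2, 2.0),
    "mass_kg":    (5.0, 25.0),
    "latency_ms": (0.0, 40.0),
}

DEPLOY_RANGES = {
    "friction":   (0.6, 1.2),
    "mass_kg":    (10.0, 15.0),
    "latency_ms": (5.0, 20.0),
}

def sample_env(ranges, rng):
    """Sample one environment configuration from the given ranges."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

def covers(train, deploy):
    """True if every deployment range lies inside the training range."""
    return all(train[k][0] <= lo and hi <= train[k][1]
               for k, (lo, hi) in deploy.items())

rng = random.Random(0)
train_env = sample_env(TRAIN_RANGES, rng)   # one randomized training episode
assert covers(TRAIN_RANGES, DEPLOY_RANGES)  # deployment is covered by training
```

The `covers` check is the crux of the idea: if it holds for every parameter that matters, then any deployment environment is just another draw from the training distribution, and there is no shift left to generalise across. The obvious failure mode is a parameter you didn't think to randomize.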
I fully expect to be wrong and look forward to finding out why in the comments.
I had a couple of thoughts about consciousness a while back. Failing to find the time to expand them into a proper post, I've decided to put them here instead.
Disclaimer: While I don't have formal training in consciousness studies, I've been thinking about these questions and would welcome feedback from those more knowledgeable.
Definition of consciousness used here: having subjective experiences (qualia)
Thought 1: The Space of Possible Consciousnesses
Thought 2: Different experiences of color vision