Maybe I can't :] but it is beatable by top humans. I bet I could win against a god with queen + knight odds.
My actual point was not about the specific configuration, but the more general claim: what matters is how balanced the game is, and you can beat an infinitely intelligent being at a sufficiently unbalanced game.
I agree that defining what game we are playing is important. However, I'm not sure about the claim that Stockfish would win if it were trained to win at queen odds or other unusual chess variants. There are many unbalanced games we can invent where one side has a great advantage over the other. Actually, there is a version of Leela that was specifically trained to play with knight odds, and it is interesting precisely because it produces balanced games against human grandmasters. Even so, I would guess that even if you invested trillions of dollars in training a chess engine on queen odds, humans would still be able to beat it reliably.
I'm not sure where the analogy breaks down, but I do think we should focus more on the nature of the game we play against a superintelligence in the real world. The game could change, for example, if a superintelligence escaped the labs and their mitigation tools and made the game more balanced for itself (say, by running itself in a distributed manner across millions of hacked personal computers).
As long as we make sure the game stays unbalanced in our favor, I do think we have a chance to mitigate the risks.
I agree that the analogy is not perfect. Can you elaborate on why you think this is a complete disanalogy?