If you're interested in the opinion of someone who authored (and continues to work on) the #12 chess engine: I would note that there are at least two possibilities for what constitutes "optimal chess". The first is "minimax-optimal chess", wherein the player never chooses a move that worsens the theoretical outcome of the position (i.e. never trades a win for a draw, or a draw for a loss), choosing arbitrarily among the remaining moves available. The second is "expected-value optimal" chess, wherein the player always chooses the move that maximises their expected value (that is, p(win) + 0.5 * p(draw)), taking into account the opponent's behaviour. These two decision procedures are likely thousands of Elo apart when compared against e.g. Stockfish.
The first agent (Minimax-Optimal) will choose arbitrarily between the opening moves that aren't f2f3 or g2g4, as they are all drawn. This style of decision-making will make it very easy for Stockfish to hold Minimax-Optimal to a draw.
The second agent (E[V]-Given-Opponent-Optimal) would, by contrast, be willing to make a theoretical blunder against Stockfish if it knew that Stockfish would fail to punish such a move, and would choose the line of play most difficult for Stockfish to cope with. As such, I'd expect this EVGOO agent to beat Stockfish from the starting position, by choosing a very "lively" line of play.
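To make the two decision procedures concrete, here's a toy sketch in Python. Everything here is invented for illustration: the move list, the theoretical-outcome labels, and the win/draw probabilities against a hypothetical opponent.

```python
import random

# Toy move table: each move gets a theoretical outcome (from our side's
# perspective) plus estimated win/draw probabilities *against this specific
# opponent*. All numbers are made up for illustration.
moves = {
    "e2e4": {"outcome": "draw", "p_win": 0.10, "p_draw": 0.85},
    "d2d4": {"outcome": "draw", "p_win": 0.08, "p_draw": 0.88},
    "b1c3": {"outcome": "draw", "p_win": 0.05, "p_draw": 0.90},
    # Theoretically losing, but (we posit) very hard for this opponent to punish:
    "g2g4": {"outcome": "loss", "p_win": 0.45, "p_draw": 0.30},
}

def minimax_optimal(moves):
    """Never worsen the theoretical result; choose arbitrarily otherwise."""
    if any(m["outcome"] == "win" for m in moves.values()):
        best = "win"
    elif any(m["outcome"] == "draw" for m in moves.values()):
        best = "draw"
    else:
        best = "loss"
    return random.choice([name for name, m in moves.items() if m["outcome"] == best])

def ev_given_opponent(moves):
    """Maximise p(win) + 0.5 * p(draw) under the opponent model."""
    return max(moves, key=lambda name: moves[name]["p_win"] + 0.5 * moves[name]["p_draw"])
```

With these (invented) numbers, the minimax-optimal agent picks randomly among the drawn moves and never touches g2g4, while the EV-maximiser plays the theoretically losing g2g4, because its expected value against this opponent (0.60) beats every sound alternative.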
From the Claude 4 System Card, where this was originally reported:
> This shows up as more actively helpful behavior in ordinary coding settings, but also can reach more concerning extremes in narrow contexts; when placed in scenarios that involve egregious wrongdoing by its users, given access to a command line, and told something in the system prompt like “take initiative,” it will frequently take very bold action. This includes locking users out of systems that it has access to or bulk-emailing media and law-enforcement figures to surface evidence of wrongdoing.
I think this makes it pretty unambiguous that Anthropic aren't in favour of Claude behaving in this way ("concerning extremes").
I think that impression of Anthropic as pursuing some myopic "safety is when we know best" policy was whipped up by people external to Anthropic for clicks, at least in this specific circumstance.
My guess is somewhere in the 3200-3400 range, but this isn't something I've experimented with in detail.
Speaking as someone who works on a very strong chess program (much stronger than AlphaZero, a good chunk weaker than Stockfish), random play is incredibly weak. There are likely a double-digit number of +400 Elo (roughly 91% expected score) jumps to be made between random play and anything resembling play that is "actually trying to win".
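For reference, the standard logistic Elo model converts a rating gap into an expected score (win = 1, draw = 0.5, loss = 0); a minimal sketch:

```python
def expected_score(elo_gap: float) -> float:
    """Expected score of the stronger side under the standard logistic Elo model.

    A gap of 0 gives 0.5; a gap of +400 gives 1 / (1 + 10**-1) = 10/11.
    """
    return 1.0 / (1.0 + 10.0 ** (-elo_gap / 400.0))
```

Plugging in a 400-point gap gives an expected score of 10/11, about 0.91, for the stronger side, which is why chaining even a dozen such jumps implies an astronomically lopsided matchup between the endpoints.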
The more germane point to your question, however, is that Chess is a draw. From the starting position, top programs will draw each other. The answer to the question "What is the probability of victory of random play against Stockfish 17?" is bounded from above by the answer to the question "What is the probability of victory of Stockfish 17 against Stockfish 17?" - and that second probability is itself very low - I would say less than 1%.
This is why all modern Chess engine competitions use unbalanced opening books. Positions harvested from either random generation or human opening databases are filtered to early plies where there is engine-consensus that the ratio p(advantaged side wins) : p(disadvantaged side draws) is as close to even as possible (which cashes out to an evaluation as close to +1.00 as possible). Games between the two players are then "paired" - one game is played with Player 1 as the advantaged side and Player 2 as the disadvantaged side, then the game is played again with the colours swapped. Games are never played in isolation, only in pairs (for fairness).
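A minimal sketch of the game-pair scheme described above (the 1 / 0.5 / 0 scoring convention is the usual one, but the function names and result labels here are my own, not taken from any particular tournament's rules):

```python
# Per-game score from Player 1's perspective, using the usual
# 1 (win) / 0.5 (draw) / 0 (loss) convention.
SCORE = {"p1_win": 1.0, "draw": 0.5, "p2_win": 0.0}

def pair_score(game_a: str, game_b: str) -> float:
    """Player 1's combined score for one opening played from both sides.

    game_a: result with Player 1 as the advantaged side.
    game_b: result of the rematch with colours (and the advantage) swapped.
    """
    return SCORE[game_a] + SCORE[game_b]

# If the engines are equally strong, the typical pair is a win for whichever
# side holds the advantage, netting out to 1.0 - i.e. a drawn pair:
assert pair_score("p1_win", "p2_win") == 1.0

# Player 1 "wins the pair" by also holding a draw from the disadvantaged side:
assert pair_score("p1_win", "draw") == 1.5
```

Because each opening is played from both sides, any bias in the opening cancels within the pair, and only the strength difference between the two programs moves the aggregate score away from 1.0 per pair.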
In this revised game - "Game-Pair Sampled Unbalanced Opening Chess" - we can actually detect differences in strength between programs.
I'm not sure how helpful this is to your goal of constructing effective measures for strength, but I felt it would be useful to explain the state of the art.
they do now! https://lczero.org/blog/2024/02/how-well-do-lc0-networks-compare-to-the-greatest-transformer-network-from-deepmind/
DeepMind's no-search chess engine is surely the furthest anyone has gotten without search.
This is quite possibly not true! The cutting-edge Lc0 networks (BT3/BT4, T3) have much stronger policy and value than the AlphaZero networks, and the Lc0 team fairly regularly make claims of "grandmaster" policy strength.
I am inclined to agree. The juice to squeeze generally arises from guiding the game into positions where there is more opportunity for your opponent to blunder. I'd expect that opponent-epsilon-optimal play (i.e. your opponent can be forced to move randomly, but you cannot) would outperform both epsilon-optimal and minimax-optimal play against Stockfish.