In chess, AIs are very superhuman; the best players in the world would lose nearly every game against any modern computer player.

Do humans still have something to add? The continued existence of correspondence chess, IMO, suggests that they do. In correspondence chess players have days to make each move, and play from their homes. Due to the impossibility of policing cheating under these conditions, correspondence players are allowed to use computer assistance.

You might think this would make the games just a question of who has more computing power. But as far as I can tell, that’s not the case.

What are humans adding? Low confidence, but I think it’s mostly opening prep; try to find a line that looks ok on shallow computer analysis, but where deeper analysis shows you have an advantage. The human value-add is telling the computer which lines to analyze. Since the chess game tree is so large, advice like this is quite valuable.

Proof that correspondence chess is still played: Interview with a human player (from 2016):

On the other hand: I don’t play correspondence chess, so I’m not that confident in the claims above. And some people don’t find them plausible:

Why should we care? This might provide some indication of what value humans can provide in a world of superhuman AI (at least initially).

Can anyone provide a more definitive account of what value, if any, humans add in correspondence chess?


3 Answers

I believe the answer is potentially. The main things which matter in high-level correspondence chess are:

  1. Total amount of compute available to players
  2. Not making errors

Although I don't think either of those is really the interesting part. The really relevant bit is (apparently) planning:

For me, the key is planning, which computers do not do well — Petrosian-like evaluations of where pieces belong, what exchanges are needed, and what move orders are most precise within the long-term plan.

(From this interview with Jon Edwards (reigning correspondence world champion) from New In Chess)

I would also highly recommend the Perpetual Chess podcast interview with Jon Edwards.

I'll leave you with this final quote, which has stuck with me for ages:

The most important game in the Final was my game against Osipov. I really hoped to win in order to extend my razor-thin lead, and the game’s 119 moves testify to my determination. In one middlegame sequence, to make progress, I had to find a way to force him to advance his b-pawn one square, all while avoiding the 50-move rule. I accomplished the feat in 38 moves, in a sequence that no computer would consider or find. Such is the joy of high-level correspondence chess. Sadly, I did not subsequently find a win. But happily, I won the Final without it!

Interesting. Note that Jon Edwards didn't win a single game there via play - he won one game because the opponent inputted the wrong move, and another because the opponent quit the tournament. All other games were draws.

Agreed - as I said, the most important things are compute and diligence. Just because a large fraction of the top games are draws doesn't say much about whether or not the humans are adding an edge (a large fraction of elite chess games are draws, but no one doubts there are differences in skill level there). Really you'd want to see Jon Edwards's setup play against a completely untweaked engine administered by a novice.
I agree. Given that AI is strongly superhuman in chess, the only winning strategy is to completely remove the human from the loop and instead invest in as much compute for the AI as one can afford. If it's a sequence that no superhuman AI would consider, that means the sequence is inferior to the much better sequences the AI would consider. It seems that even after two decades of complete AI superiority, some top chess players still imagine that they are in some ways better at chess than the AI, even if they can't win against it.
Yair Halberstadt:
If you look at the actual scenario there, the game was essentially in a stalemate, where the only possible way to win was to force the other player to advance a pawn. Stockfish can't look 30 moves ahead to see that this is possible, so it would have just flailed around. You still need Stockfish, because without it, any move you make could be a tactical error which the other player's computer would pounce on. But Stockfish can't see the greater strategic picture if it's beyond its tactical horizon.
This seems needlessly narrow-minded. Just because AI is better than humans overall doesn't make it uniformly better than humans at every subtask of chess. I don't know enough about the specifics this guy is talking about (I am not an expert), but I do know that until the release of NN-based algorithms, most top players were still comfortable pointing to positions soon out of the opening that the computer was mis-evaluating. To take another, more concrete example: computers were much better than humans in 2004, and yet Peter Leko still managed to refute a computer-prepared line OTB in a world championship game.

Yes, humans still provide value. Correspondence chess players will, for example, read chess opening books to check whether there are any mistakes in them, and even if they find just one, they'll try to lead their opponent into that dubious line, which often involves a mistake that computers can't easily spot. Also, as a former highly ranked chess player, I'd use multiple chess engines at the same time to compare and contrast, and I'd know their strengths and weaknesses and which possibilities to explore.

Time should also be a factor when comparing strength between an AI alone and an AI-human team. Humans might add value in correspondence chess, but it will cost them a significant amount of time. Human-AI teams are very slow compared to AI alone.

For example, in low-latency algorithmic stock trading, reaction times are below 10 ms, while human reaction time is around 250 ms. A human-AI team of stock traders would have a minimum reaction time of 250 ms (if the human immediately agrees whenever the AI suggests a trade). This is way too slow and means a serious competitive disadvantage.
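The latency gap above can be made concrete with a toy calculation. This uses only the 10 ms and 250 ms figures quoted in the paragraph above; the code is purely illustrative, not a model of real trading systems:

```python
# Toy comparison of sequential decision throughput for an AI-only trader
# versus a human-in-the-loop team, using the latency figures quoted above.

AI_LATENCY_S = 0.010     # ~10 ms reaction time for a low-latency algorithm
HUMAN_LATENCY_S = 0.250  # ~250 ms human reaction time (best case: instant approval)

def decisions_per_second(latency_s: float) -> float:
    """Upper bound on sequential decisions per second at a given latency."""
    return 1.0 / latency_s

ai_rate = decisions_per_second(AI_LATENCY_S)      # ~100 decisions/s
team_rate = decisions_per_second(HUMAN_LATENCY_S)  # 4 decisions/s

print(f"AI alone:      {ai_rate:.0f} decisions/s")
print(f"Human-AI team: {team_rate:.0f} decisions/s")
print(f"Slowdown:      {ai_rate / team_rate:.0f}x")
```

Even in the best case, where the human rubber-stamps every suggestion instantly, the team is roughly 25x slower, which is why the human can only add value where decisions are not latency-bound.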

Jonathan Paulson:
I think you are underrating the number of high-stakes decisions in the world. A few examples: whether or not to hire someone, the design of some mass-produced item, which job to take, who to marry. There are many more. These are all cases where making the decision 100x faster is of little value, because it will take a long time to see if the decision was good or not after it is made. And where making a better decision is of high value. (Many of these will also be the hardest tasks for AI to do well on, because there is very little training data about them).
True, it depends on the ratio of mundane to high-stakes decisions, although there are high-stakes decisions that are also time-dependent. See the example about high-frequency trading (no human in the loop, and the algorithm makes trades in the millions).

Furthermore, your conclusion that time-independent high-stakes decisions will be the tasks where humans provide the most value seems true to me. AI will easily be superior where there are time constraints; absent such constraints, humans will have a better chance of competing with AI. And economic strategic decisions are often not extremely time-constrained (there are at least a couple of hours or days available). In economic settings the number of high-stakes decisions will be limited (only a few people make decisions about large sums of money and strategy). In a multinational with 100,000 employees, only very few will take high-stakes decisions, but those decisions might have a significant impact on competitiveness. Thus a multinational with a human CEO might outcompete a fully AI-run company.

In a military-strategic situation, time might give more of an advantage (I am an economist, not a military expert, so I am really guessing here). My guess would be that a drone without a human in the loop could have a significant advantage, so pressure might rise to push for high-stakes decision-making, over human lives, by drones.
6 comments

Presumably you could test this directly. Have 100 players each play a correspondence game against each of the top 3 chess engines, giving them a maximum amount of compute time (say, no more than 2 hours per move), with the engines playing unassisted.

My guess is they would have a slight advantage, at least over stockfish, if only by exploiting known bugs/adversarial situations.

FWIW, the two main online chess sites forbid the use of engines in correspondence games. But both do allow the use of opening databases. 


Do humans still have something to add? The continued existence of correspondence chess, IMO, suggests that they do.

Sorry, but that is absurd: the fact that humans continue to play correspondence chess is only the most teeniest tiniest evidence that humans have something to add.

I am interpreting, "humans have something to add," as "the best human player using computers still has some kind of advantage over a computer alone where no human is allowed to intervene in the operation of the computer over the course of the game except perhaps to ensure a steady supply of electricity".

Note that I am not opining either way on your overall question. I am commenting only on what conclusions we can draw from the fact that people still play correspondence chess.

Why do you think so?

Presumably the people playing correspondence chess think that they are adding something, or they would just let the computer play alone. And it’s not a hard thing to check; they can just play against a computer and see. So it would surprise me if they were all wrong about this.

Human social behavior is complex. Maybe some or all of the winners of ICCF tournaments won by merely parroting the moves chosen by an engine, but they chose not to admit it out of a worry that admitting it would cause a change in the rules that would disadvantage them in future tournaments.

A document titled "2023 ICCF Rules" does not exactly explicitly encourage the parroting behavior I just described (though it does not explicitly disallow it either):

In ICCF event games, players must decide their own moves. Players are permitted to consult prior to those decisions with any publicly available source of information including chess engines (computer programs), books, DVDs, game archive databases, endgame tablebases, etc. . . . No other consultation with another person concerning analysis of an active position is allowed . . .

The only things I omitted from the paragraph I just quoted have to do with humans playing as a team.

I have quoted from the public written rules of the tournaments because public written information is all I have access to. Communities often develop unwritten rules that strongly influence human behavior -- rules we would have no way of knowing about without asking a community member in a context in which the member has some basis for trusting us.

It might be the case that ICCF's leaders see it (correctly IMO) as impossible to enforce a rule against chess engines, so they allow them as a practical measure, but they don't like them, and most of the winners know that, which again would tend to cause anyone who won by merely parroting moves chosen by an engine to choose not to announce that fact.

Or it might be that the majority of those with a megaphone that reaches the correspondence-chess community maintain that chess engines have ruined the once noble and delightful correspondence-chess scene, with again the same effect.

If the organizers of a tournament explicitly declared that one of the purposes of the tournament is to determine whether human-computer teams can outperform computers alone, then that would start to be evidence worth considering (against, e.g., the evidence provided by the overwhelming dominance of computers over human-computer teams in chess played under other rules, other time controls to be specific) -- particularly if there was decent prize money.
