Thank you, very interesting paper (and it is good to see that Kramnik has kept up an active interest in chess computing, hehe).
The paper you cite seems to converge nicely with evidence that large transformer model embeddings for language comprehension map onto human semantic understanding (e.g.,
https://www.nature.com/articles/s41598-022-20460-9
https://www.nature.com/articles/s42003-022-03036-1 )
Related, it appears we are even approaching a refined normative theory & practical benchmarking of hyperparameter transfer, which may be very roughly akin ...