When fine-tuning fails to elicit GPT-3.5's chess abilities