I agree with most of this, but I think you're typical-minding when you assume that successionists are using this to resolve their own fear or sadness surrounding AI progress. I think instead, they mostly never seriously consider the downsides because of things like the progress heuristic. They never experience the fear or sadness you refer to in the first place. For them, it is not "painful to think about" as you describe.
Here is Eliezer's post on this topic from 17 years ago for anyone interested: https://www.lesswrong.com/posts/3Jpchgy53D2gB5qdk/my-childhood-role-model
Anna Salamon's comment and Eliezer's reply to it are particularly relevant.
Searching for the keyword "prompt engineering" (both here and on Google) may guide you to some helpful resources. Sorry I don't have anything specific to link you to.
No massive advance (no GPT-5, or disappointing GPT-5)
Inversion: There was a substantial advance in frontier model AI in 2024.
Shouldn't the inversion simply be "There was a massive advance"?
If you have Long COVID or ME/CFS, or want to learn more about them, I highly recommend https://s4me.info. The signal-to-noise ratio is much better than on any other forum I've found for those topics. The community is good at recognizing high- versus low-quality studies and critiquing them accordingly.
As an example of the quality, this factsheet created by the community is quite good: https://s4me.info/docs/WhatIsMECFS-S4ME-Factsheet.pdf
Did you and GPT-4 output only the moves, or did you also output the board state after each turn?
Unfortunately, without speaker labels the YouTube transcript is less useful unless you're listening while reading.
Is there a transcript anywhere?
Another similar result was that AlphaFold was trained on its own high-confidence predictions for protein sequences with unknown structures:
The AlphaFold architecture is able to train to high accuracy using only supervised learning on PDB data, but we are able to enhance accuracy (Fig. 4a) using an approach similar to noisy student self-distillation [35]. In this procedure, we use a trained network to predict the structure of around 350,000 diverse sequences from Uniclust30 [36] and make a new dataset of predicted structures filtered to a high-confidence subset. We then train the same architecture again from scratch using a mixture of PDB data and this new dataset of predicted structures as the training data, in which the various training data augmentations such as cropping and MSA subsampling make it challenging for the network to recapitulate the previously predicted structures. This self-distillation procedure makes effective use of the unlabelled sequence data and considerably improves the accuracy of the resulting network.
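To make the procedure concrete, here is a minimal self-contained sketch of that noisy-student loop, using a toy scikit-learn classifier in place of AlphaFold's network. The dataset, the model, and the 0.9 confidence threshold are my assumptions for illustration; only the overall train / pseudo-label / filter / retrain structure mirrors the paper's description.

```python
# Toy sketch of noisy-student self-distillation. Nothing here comes from
# the AlphaFold codebase; the classifier and threshold are placeholders.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Stand-ins: a small labeled set (analogous to PDB data) and a larger
# pool of unlabeled inputs (analogous to the Uniclust30 sequences).
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_labeled, y_labeled = X[:200], y[:200]
X_unlabeled = X[200:]

# 1. Train a teacher on the labeled data only.
teacher = RandomForestClassifier(random_state=0).fit(X_labeled, y_labeled)

# 2. Predict labels for the unlabeled pool, keeping a confidence score.
probs = teacher.predict_proba(X_unlabeled)
confidence = probs.max(axis=1)
pseudo_labels = probs.argmax(axis=1)

# 3. Filter the predictions to a high-confidence subset.
keep = confidence >= 0.9
X_pseudo, y_pseudo = X_unlabeled[keep], pseudo_labels[keep]

# 4. Retrain the same architecture from scratch on the mixture of real
#    and pseudo-labeled data. (AlphaFold additionally applies training
#    augmentations such as cropping and MSA subsampling; a toy analogue
#    here would be feature dropout or subsampling.)
X_mix = np.vstack([X_labeled, X_pseudo])
y_mix = np.concatenate([y_labeled, y_pseudo])
student = RandomForestClassifier(random_state=1).fit(X_mix, y_mix)
```

The design point the paper emphasizes is the "noise": training-time augmentations make it hard for the student to simply recapitulate the teacher's predictions, which is what turns pseudo-labeling into a genuine accuracy improvement rather than memorization.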
I suggest posting this to s4me.info, which is the best forum I know of for Long COVID and ME/CFS. They have a thread on metformin here, and if you search for metformin, you'll find various other threads about studies on it as well.