I don't have the karma to inline react yet, so I have to point out this typo as a comment:
this series run the gamut
Should either be "these series run the gamut" or "this series runs the gamut".
the big problem in learning this valuable way of thinking was figuring out how to practice it, not knowing the theory.
This post is one of my very first forays into Less Wrong, and this resonates with me deeply. I recently finished The Demon-Haunted World by Carl Sagan and felt this there, as well. He does so much to extol the scientific method without showing readers how to exercise the discipline. Granted, writing a primer on rational thought was probably not one of his goals, but the book still left me wondering where modern readers ought to turn for such a primer.
How does one get in the reps?
I have a suspicion this might be the place, although the barriers to entry seem a little intimidating. This comment is my first attempt at dipping my toes in to see how the water feels. This post was the first one that gave me some confidence to do so.
I really enjoyed reading this as a thorough examination of the author's own experience exploring metacognition with Claude. I struggled with some of the fundamental points, though, especially those that seemed to explicitly anthropomorphize LLMs' cognitive function, e.g. the comparison to caregivers reinforcing a baby's behavior through their own interpretation of that behavior.
In spite of the obvious parallels between evolutionary psychology and training, this anthropomorphization risks clouding our ability to interpret behavior and cognition objectively, separately from the way a human brain and mind may work.
This sort of analysis feels like a productive extension of the prior that LLMs may be human-esque in their cognition. And certainly our subjective experiences with them tend to reinforce that prior, especially when they produce surprising output.
But if you set aside the idea that something producing human-like output must function cognitively like a human and in no other way, the arguments built on top of that prior start to get a little shaky.
I don't have an alternative hypothesis to offer at the moment, but I'm generally wary of casual comparisons between biological intelligence and LLMs.