This post hopes to foster discussion of the latent heutagogic potential in human+LLM collaboration and to share some early results with the LessWrong community.
The spontaneous emergence of theory-of-mind capabilities in large language models hints at a host of overhung possibilities relevant to education. Metaprompting can elicit the foundation of such behavior: reasoning pedagogically about the learner.
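As a minimal sketch of what such a metaprompt might look like, the function below wraps a tutoring exchange in instructions that ask the model to reason about the learner's state before replying. The function name and prompt wording are hypothetical illustrations, not tutor-gpt's actual prompts.

```python
def build_metaprompt(dialogue_history: str, student_message: str) -> str:
    """Wrap the conversation in instructions that elicit a
    theory-of-mind step (reasoning about the learner's knowledge
    and state) before the tutoring response itself."""
    return (
        "You are a tutor. Before replying, reason step by step about "
        "the student's likely background knowledge, misconceptions, "
        "and emotional state, then choose the response most likely "
        "to advance their understanding.\n\n"
        f"Conversation so far:\n{dialogue_history}\n\n"
        f"Student: {student_message}\n"
        "First write your pedagogical reasoning, then your reply."
    )

prompt = build_metaprompt(
    "Student: What is a derivative?\nTutor: Let's start with slopes.",
    "I still don't get why slope matters.",
)
print(prompt)
```

The point is only the shape of the technique: the pedagogical-reasoning instruction precedes the conversation, so the model's first tokens model the learner rather than answer directly.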
Teachers constantly construct and revise robust psychological models of their students; good ones do so subconsciously. For AI applications to deliver worthwhile learning experiences, they'll need to excel in this regard.
TL;DR
We open-sourced "tutor-gpt," a digital Aristotelian learning companion.
What makes tutor-gpt compelling is its ability to infer from dialogue the most educationally optimal tutoring response. Eliciting this from the capability overhang involves multiple chains of...