Please forgive my lack of knowledge and inexperience on these topics. My interest in the intersection of AI, Jungian studies, and the human condition led me to stumble on this community, and I'm continually impressed with the scope and depth of thought I find here. I don't have much technical acumen to bring to bear on this topic, but I was at least able to get GPT-2 running on my crusty Linux (Debian) laptop. I began using Zen Buddhist \href{https://en.wikipedia.org/wiki/Koan}{koans}, those paradoxical, non sequitur riddles, as prompts. My goal was to free both myself and the GPT-2 instance I hosted from the need to "make any sense", hopefully revealing deeper patterns. I believe that errors can be as significant as successes.
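For what it's worth, my setup was nothing fancy. Something like the sketch below (using the Hugging Face transformers library; the koan here is just an example prompt, not necessarily the ones I used) captures the spirit of what I was doing:

```python
# A minimal sketch of koan-prompted generation with GPT-2 via the
# Hugging Face transformers library. Just one way to run GPT-2 locally;
# the prompt and sampling settings are illustrative, not canonical.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

koan = "What is the sound of one hand clapping?"
inputs = tokenizer(koan, return_tensors="pt")

# Sample instead of greedy decoding, so the model is free to wander
# rather than chase the most "sensible" continuation.
outputs = model.generate(
    **inputs,
    max_length=100,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```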

As expected, given that I was using the off-the-shelf, open-source GPT-2 transformer, the results were nothing rigorous, but I feel as if I saw some indications of alignment inherent in the model. And this got me thinking: what would happen if a transformer instance were recursively trained on only its own output?

I have no idea how a transformer is "trained" (if that is even the correct terminology), but it would be interesting to see what window into the transformer's processes would be opened by continuously re-training it on its own content, much like the wash of white noise that develops when an analog tape echo is fed back on itself.
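Someone with more know-how should correct me, but as I understand it, "training" here would mean fine-tuning: nudging the model's weights on new text. A rough sketch of the feedback loop I'm imagining, again assuming the Hugging Face transformers library and GPT-2 (I haven't run this; the prompt, number of passes, and learning rate are arbitrary placeholders):

```python
# A rough sketch of the feedback loop: generate text from GPT-2,
# fine-tune the model on that text, and repeat. Hyperparameters and
# lengths are arbitrary placeholders, not a tested recipe.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

seed_prompt = "What is the sound of one hand clapping?"

for generation in range(5):  # each pass feeds the echo back in
    # 1. Generate a batch of text from the current model.
    model.eval()
    inputs = tokenizer(seed_prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        sampled = model.generate(
            **inputs,
            max_length=128,
            do_sample=True,
            top_p=0.95,
            num_return_sequences=8,
            pad_token_id=tokenizer.eos_token_id,
        )
    texts = [tokenizer.decode(s, skip_special_tokens=True) for s in sampled]

    # 2. Fine-tune the model on its own output (language-modeling loss).
    model.train()
    for text in texts:
        batch = tokenizer(text, return_tensors="pt", truncation=True,
                          max_length=128).to(device)
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

    print(f"generation {generation}: loss {loss.item():.3f}")
```

Each pass would feed the model a little more of its own echo, which is exactly the drift I'm curious about.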

As more and more AI-created content permeates the internet, the very source of future training data, do we run the risk of creating an echo chamber? Are we eating where we defecate?

Perhaps this has already been pursued, or, more probably, not pursued for obvious reasons I'm unaware of given my limited understanding of these processes. I'd appreciate any thoughts anyone might care to share on this topic.

Thanks for being a great space to read thinking about this kind of stuff; it gives me hope.
