This post was rejected for the following reason(s):
No LLM-generated, heavily LLM-assisted/co-written, or otherwise LLM-reliant work. LessWrong has recently been inundated with new users submitting work where much of the content is the output of LLMs. This work by and large does not meet our standards and is rejected. This includes dialogues with LLMs that claim to demonstrate various properties about them, and posts introducing some new concept and terminology to explain how LLMs work, often centered on recursiveness, emergence, sentience, consciousness, etc. (these generally don't turn out to be as novel or interesting as they may seem).
Our LLM-generated content policy can be viewed here.
No Basic LLM Case Studies. We get many new users submitting case studies of conversations with LLMs in which they prompt the models into different modalities. We reject these because:
- The content is almost always very similar.
- Usually, the user overestimates how novel or interesting their case study is (it is fairly easy to prompt LLMs into various modes of conversation or apparent awareness/emergence, and doing so is not actually strong evidence of anything interesting).
- Most of these situations appear to be instances of Parasitic AI.
We haven't necessarily reviewed your case in detail, but since we receive several of these per day, alas, we don't have time to do so.