This post was rejected for the following reason(s):
Insufficient Quality for AI Content. There’ve been a lot of new users interested in AI coming to LessWrong recently. To keep the site’s quality high and ensure posts are interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar.
If you want to try again, I recommend writing something short and to the point, focusing on your strongest argument rather than a long, comprehensive essay. (This is fairly different from common academic norms.) We get lots of AI essays/papers every day, and sadly most of them don’t make very clear arguments; we don’t have time to review them all thoroughly.
We look for good reasoning, making a new and interesting point, bringing new evidence, and/or building upon prior discussion. If you were rejected for this reason, a good next step is to read more existing material. The AI Intro Material wiki-tag is a good place to start, for example.
Missing some rationality basics. Sometimes it’s hard to judge, but my feeling from your submission is that it fails to apply some of the basic rationality mental motions that are expected on LessWrong. There’s a fairly long list of these, but they include things like focusing on predictions, defining things clearly or tabooing definitions, and expressing uncertainty quantitatively. See this general intro to LessWrong.
Confusion / muddled reasoning. I felt your submission had a bit too much confusion or muddled thinking to approve. Reasons I check the box for this feedback item include things like “really strange premises that aren’t justified,” “inferences that don’t seem to follow from the premises,” “use of weird categories,” “failure to understand the basic topics it discusses (e.g., completely misunderstanding how LLMs work),” and/or “failure to respond to basic arguments about the topic.” Often the right thing to do in this case is to read more about the topic you’re discussing.