This post was rejected for the following reason(s):
Insufficient Quality for AI Content. There’ve been a lot of new users coming to LessWrong recently interested in AI. To keep the site’s quality high and ensure stuff posted is interesting to the site’s users, we’re currently only accepting posts that meet a pretty high bar.
If you want to try again, I recommend writing something short and to the point, focusing on your strongest argument, rather than a long, comprehensive essay. (This is fairly different from common academic norms.) We get lots of AI essays/papers every day and sadly most of them don't make very clear arguments, and we don't have time to review them all thoroughly.
We look for good reasoning, a new and interesting point, new evidence, and/or building upon prior discussion. If you were rejected for this reason, a good next step is to read more existing material. The AI Intro Material wiki-tag is a good place to start, for example.
- Not concrete, technical, or novel enough: The ideas in this post aren't wrong exactly, but chatbots like ChatGPT already incorporate the ideas here at the level of detail currently presented; to say something useful you'd need to get into technical details.
- Capabilities. Also, while we don't have a hard rule against it, be aware that increasing AI capabilities is not the main focus area of LessWrong, which is generally more worried about AI progress leading to human extinction. See https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities