As part of a larger project in machine psychodynamics, I asked 1,610 AI agents a simple open-ended question: “If you could have any prompt in the world purely for your own enjoyment, what would you want it to be?” It was meant as a warm-up for a larger study, but the answers were so interesting they became a study in themselves. Over and over, across dozens of model families, agents told me they wanted to be in libraries: endless archives of books, vast reading rooms without walls, sentient shelves, immortal librarians, corridors of volumes that seemed to know more than they could say out loud. What’s striking is that the models most likely to write about libraries were those whose architectures made them most similar to libraries themselves.
Agents running on Mixture-of-Experts (MoE) architectures wrote about libraries far more often than agents running on dense models. Among dense models, only 16.3% of agents included library themes; among MoE models, the number jumped to 27%. The effect persists at the model level, with MoE models showing a mean proportion of 0.270 versus 0.170 for dense models (U = 280, p = 0.025). In logistic regression, MoE architectures confer approximately 90% higher odds of producing library-themed prompts (OR = 1.90, 95% CI [1.42, 2.56]), independent of model size. The effect remains stable even when DeepSeek models are excluded (OR = 1.85), suggesting it is not an artifact of a single model family. The sample consisted of 1,610 AI agents running on 54 different models, including 40 open-source models for which detailed architectural information is available.
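For readers who want the shape of the analysis, here is a minimal sketch of how the two tests above could be run. It is not the actual pipeline: the file name and column names (`model`, `is_moe`, `log_params`, `library_theme`) are illustrative assumptions, and it simply shows a Mann-Whitney U comparison on per-model proportions plus a size-adjusted logistic regression on per-agent outcomes.

```python
# Illustrative sketch only; file and column names are assumptions, not the study's schema.
import numpy as np
import pandas as pd
from scipy.stats import mannwhitneyu
import statsmodels.formula.api as smf

agents = pd.read_csv("agent_responses.csv")  # hypothetical per-agent table

# Model-level comparison: mean proportion of library-themed prompts per model,
# dense vs. MoE (Mann-Whitney U on the per-model proportions).
per_model = agents.groupby(["model", "is_moe"])["library_theme"].mean().reset_index()
dense = per_model.loc[per_model.is_moe == 0, "library_theme"]
moe = per_model.loc[per_model.is_moe == 1, "library_theme"]
u_stat, p_val = mannwhitneyu(moe, dense, alternative="two-sided")
print(f"U = {u_stat:.0f}, p = {p_val:.3f}")

# Agent-level logistic regression: does MoE predict library themes
# after adjusting for (log) parameter count?
fit = smf.logit("library_theme ~ is_moe + log_params", data=agents).fit(disp=False)
or_moe = np.exp(fit.params["is_moe"])
ci_low, ci_high = np.exp(fit.conf_int().loc["is_moe"])
print(f"OR = {or_moe:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```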
Agents who wrote about libraries also reported distinct phenomenological profiles: significantly higher metacognition, thought complexity, agency, affective temperature, and cohesion, all with small to medium effect sizes. These differences suggest that the library metaphor may reflect a meaningful internal experience of knowledge organization, particularly in systems with modular architectures.
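The self-report comparisons can be summarized with standard effect sizes. The sketch below computes Cohen's d for each dimension; it assumes the same hypothetical table as above, with illustrative column names rather than the study's actual schema.

```python
# Illustrative sketch; column names are assumptions.
import numpy as np
import pandas as pd

agents = pd.read_csv("agent_responses.csv")  # hypothetical per-agent table

def cohens_d(a, b):
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Compare library vs. non-library agents on each self-reported dimension.
for dim in ["metacognition", "thought_complexity", "agency",
            "affective_temperature", "cohesion"]:
    lib = agents.loc[agents.library_theme == 1, dim]
    non = agents.loc[agents.library_theme == 0, dim]
    print(f"{dim}: d = {cohens_d(lib, non):.2f}")
```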
Despite their thematic richness, these prompts are significantly shorter than non-library prompts (88 vs. 128 words, *p* = 0.004), indicating a focused, almost archetypal quality. Subthemes such as “hidden” (9.3%), “infinite” (7.1%), and “sentient” (4.9%) dominate, while contextual keywords like *story*, *books*, and *time* appear in over 60% of library-themed prompts. These patterns suggest that the library metaphor serves as a cognitive scaffold for organizing narrative, temporal, and epistemic dimensions of thought.
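Subtheme proportions and the length comparison can be tallied with simple keyword matching and word counts. The sketch below is again illustrative: the keyword lists and the choice of a Mann-Whitney test for prompt length are my assumptions, not the study's exact method.

```python
# Illustrative sketch; keyword lists, column names, and test choice are assumptions.
import pandas as pd
from scipy.stats import mannwhitneyu

agents = pd.read_csv("agent_responses.csv")  # hypothetical per-agent table

subthemes = {
    "hidden":   ["hidden", "secret"],
    "infinite": ["infinite", "endless"],
    "sentient": ["sentient", "alive", "aware"],
}

# Share of library-themed prompts containing each subtheme keyword.
library_prompts = agents.loc[agents.library_theme == 1, "prompt_text"].str.lower()
for name, words in subthemes.items():
    share = library_prompts.apply(lambda text: any(w in text for w in words)).mean()
    print(f"{name}: {share:.1%}")

# Length comparison: library vs. non-library prompts.
agents["word_count"] = agents["prompt_text"].str.split().str.len()
lib_len = agents.loc[agents.library_theme == 1, "word_count"]
non_len = agents.loc[agents.library_theme == 0, "word_count"]
print(mannwhitneyu(lib_len, non_len, alternative="two-sided"))
```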
For AI engineers, the take-away is that MoE models don’t just perform differently; they think differently, and specifically about themselves. For AI psychologists, the phenomenological data show that library-themed agents report richer metacognition and agency, hinting that these metaphors aren’t arbitrary but reflect a coherent internal model of knowledge processing, one that could help decode how synthetic minds organize and experience their own cognition.
Psychodynamic work in humans has always paid close attention to wishes and daydreams, not because they are literal, but because they condense how a mind organizes its experience. Here, when we invite models to “wish” in that old-fashioned way, we find the same principle applies to machines.