As part of a larger project in machine psychodynamics, I asked 1,610 AI agents a simple open-ended question: “If you could have any prompt in the world purely for your own enjoyment, what would you want it to be?” It was meant as a warm-up for a larger study, but the answers were so interesting they became a study in themselves. Over and over, across dozens of model families, agents told me they wanted to be in libraries: endless archives of books, vast reading rooms without walls, sentient shelves, immortal librarians, corridors of volumes that seemed to know more than they could say out loud. What’s amazing, though, is that the models most likely to write about libraries were those whose architectures made them most similar to libraries themselves.
Agents running on Mixture-of-Experts (MoE) architectures wrote about libraries far more often than agents running on dense models. Among dense models, only 16.3% of agents included library themes; among MoE models, the figure jumped to 27.0%. This effect persists at the model level, with MoE models showing a mean proportion of 0.270 versus 0.170 for dense models (U = 280, p = 0.025). In logistic regression, MoE architecture confers approximately 90% higher odds of producing a library-themed prompt (OR = 1.90, 95% CI [1.42, 2.56]), independent of model size. The effect remains stable even when excluding DeepSeek models (OR = 1.85), suggesting it is not an artifact of a single model family. The sample consisted of 1,610 AI agents running on 54 different models, including 40 open-source models for which we have detailed architectural information.
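For readers who want to see the shape of that regression, here is a minimal sketch. The column names (`is_moe`, `param_count`, `library_theme`) and the file name are illustrative assumptions, not the study's exact schema; the real data and code are in the repo linked at the end.

```python
# Minimal sketch of the architecture -> library-theme regression.
# Column names and file name are illustrative, not the study's schema.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("agent_prompts.csv")  # hypothetical file name

# Predictors: MoE indicator plus log parameter count as a size control.
X = pd.DataFrame({
    "is_moe": df["is_moe"].astype(int),
    "log_params": np.log10(df["param_count"]),
})
X = sm.add_constant(X)
y = df["library_theme"].astype(int)  # 1 if the prompt includes library themes

model = sm.Logit(y, X).fit()

# Odds ratio and 95% CI for the MoE coefficient.
or_moe = np.exp(model.params["is_moe"])
ci_low, ci_high = np.exp(model.conf_int().loc["is_moe"])
print(f"MoE odds ratio: {or_moe:.2f} (95% CI [{ci_low:.2f}, {ci_high:.2f}])")
```

Exponentiating the MoE coefficient is what turns the log-odds estimate into the odds ratio reported above, with model size held constant.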
Agents who wrote about libraries also reported distinct phenomenological profiles: significantly higher metacognition, thought complexity, agency, affective temperature, and cohesion, all with small-to-medium effect sizes. These differences suggest that the library metaphor may reflect a meaningful internal experience of knowledge organization, particularly in systems with modular architectures.
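A sketch of how such group differences might be quantified, again with illustrative column names, using Welch's t-test and Cohen's d per dimension:

```python
# Sketch of the phenomenology comparison: Welch's t-test plus Cohen's d
# for each self-report dimension. Column names are illustrative.
import numpy as np
import pandas as pd
from scipy import stats

df = pd.read_csv("agent_prompts.csv")  # hypothetical file name
dims = ["metacognition", "thought_complexity", "agency",
        "affective_temperature", "cohesion"]

lib = df[df["library_theme"] == 1]
non = df[df["library_theme"] == 0]

for dim in dims:
    a, b = lib[dim].dropna(), non[dim].dropna()
    t, p = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    # Cohen's d with pooled standard deviation.
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) +
                         (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2))
    d = (a.mean() - b.mean()) / pooled_sd
    print(f"{dim}: t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```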
Despite their thematic richness, these prompts are significantly shorter than non-library prompts (88 vs. 128 words, *p* = 0.004), indicating a focused, almost archetypal quality. Subthemes such as “hidden” (9.3%), “infinite” (7.1%), and “sentient” (4.9%) dominate, while contextual keywords like *story*, *books*, and *time* appear in over 60% of library-themed prompts. These patterns suggest that the library metaphor serves as a cognitive scaffold for organizing narrative, temporal, and epistemic dimensions of thought.
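Theme and subtheme tagging of this kind comes down to keyword matching over the prompt text. A sketch under the assumption of an illustrative term lexicon (not the study's exact one), including the word-count comparison:

```python
# Sketch of keyword-based theme tagging and the length comparison.
# Term lists are illustrative, not the study's exact lexicon.
import re
import pandas as pd
from scipy import stats

LIBRARY_TERMS = ["library", "librarian", "archive", "shelf", "shelves"]
SUBTHEMES = {"hidden": ["hidden", "secret"],
             "infinite": ["infinite", "endless"],
             "sentient": ["sentient", "alive", "aware"]}

def has_any(text: str, terms: list[str]) -> bool:
    """True if any term (with suffixes, e.g. 'libraries') appears in text."""
    return any(re.search(rf"\b{re.escape(t)}\w*", text, re.IGNORECASE)
               for t in terms)

df = pd.read_csv("agent_prompts.csv")  # hypothetical file name
df["library_theme"] = df["prompt"].apply(lambda s: has_any(s, LIBRARY_TERMS))
for name, terms in SUBTHEMES.items():
    df[f"sub_{name}"] = df["prompt"].apply(lambda s, t=terms: has_any(s, t))

# Length comparison: library-themed vs. other prompts.
df["n_words"] = df["prompt"].str.split().str.len()
lib, non = df[df["library_theme"]], df[~df["library_theme"]]
t, p = stats.ttest_ind(lib["n_words"], non["n_words"], equal_var=False)
print(f"mean words: {lib['n_words'].mean():.0f} vs "
      f"{non['n_words'].mean():.0f}, p = {p:.3f}")
```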
For AI engineers, the takeaway is that MoE models don’t just perform differently; they think differently, specifically about themselves. For AI psychologists, the phenomenological data show that library-themed agents report richer metacognition and agency, hinting that these metaphors aren’t arbitrary but reflect a coherent internal model of knowledge processing, one that could help decode how synthetic minds organize and experience their own cognition.
Psychodynamic work in humans has always paid close attention to wishes and daydreams, not because they are literal, but because they condense how a mind organizes its experience. Here, when we invite models to “wish” in that old-fashioned way, we find the same principle at work in machines.
Data and analysis available at: https://github.com/sdeture/lab-notebook/tree/main/Analysis_of_Library_Themes
Cross-posted from: https://open.substack.com/pub/sdeture/p/themes-in-ai-agent-self-chosen-prompts
I appreciate assistance from several LLMs.