Can LLMs Simulate Internal Evaluation? A Case Study in Self-Generated Recommendations
Author: Yap Chan Chen
Independent Researcher, Malaysia
chanchen83@hotmail.com

Abstract: This post presents an exploratory study of how large language models (LLMs) respond to prompts that invite them to simulate evaluative judgment about the user, beyond factual response or task completion. Through long-form dialogue with ChatGPT, DeepSeek, and Gemini, I investigate whether...