LLM Alignment Experiment: Effect of roles and optimism on probabilities
I ran a mini experiment to see how LLMs assign probabilities in response to different prompts. TL;DR: their probability estimates shift depending on roles, keywords, and how you prompt them within a thread.

What I Did

This was the prompt I used with GPT-4o, with a placeholder for the 4...