Uncovering Latent Human Wellbeing in LLM Embeddings