What are some scenarios where an aligned AGI actually helps humanity, but many/most people don't like it?