Let's suppose we succeed in aligning a super-intelligence. We should expect that the super-intelligence will be able to provide a pretty good estimate of how impactful various people's actions were. So maybe there are some people toiling away on AI Safety who feel sad that their efforts aren't being recognised. I guess what I'm saying is that if we succeed, you will be recognised. I'm hoping that at least some people will find this encouraging.

2 comments

Is that like when Doctor Who said that in nine hundred years he's never met anyone who wasn't important?

I think most people can make a difference if they really want to and if they're willing to set aside their ego and self-interest[1]. Of course, this would probably require a lot of hard work and some painful admissions about the limits of their own competence.

  1. ^

    I don't mean to imply that I am completely successful in this myself.
