In this post, I lay out my alignment research agenda and give reasons why I think people should engage with it. I'll be editing this post after I put it up, so don't be surprised if it changes out from under you after you comment, especially if I find your comment useful and insightful.
In my mind, the steps to building an aligned superintelligence correspond to a set of components that need to be built. The components I envisage needing to be built are: