I'm not sure whether this is on-topic for this forum -- if it's too far from the forum's purpose, let me know and I'll take it down!

I've recently published an introduction to research on superintelligence risk, with the aim of making it easier for students to get into this area. It can be found here:

Three areas of research on the superintelligence control problem

I'd love to hear comments from people on this forum about the content and format -- you're all researchers who have gotten involved in this area, so your impressions are important!
