This field guide is my attempt to collect my observations and conceptual ideas regarding artificial general intelligence (AGI): its place in our current moment of AI history, and designs for AGI that might just get us out of our current exposure to existential risk from technology.

I am writing it as a series of topical papers, which I will link to from here as they are completed.

Let’s begin with the most basic principles of general intelligence and safety.
Chapter 1: Here Be Dragons
Chapter 2: On Generality

More to come...