The number of people who deeply understand superintelligence risk is far too small. There's a growing pipeline of people entering AI Safety, but most of the available onboarding covers the field broadly, touching on many topics without going deep on the parts we think matter most. People come out having been exposed to AI Safety ideas, but often can't explain why alignment is genuinely hard or think strategically about what to work on. We think the gap between "I've heard of AI Safety" and "I understand why this might end everything, and can articulate it" is one of the most important gaps to close.
We started Lens Academy to close that gap. Lens Academy is a free, nonprofit AI Safety education platform focused specifically on misaligned superintelligence: why...
Mentorship is one of the services most frequently requested on AI Safety Quest's Navigation calls. I hope this service can help bridge the gap between "I want to do something about AI safety" and "I'm working on a meaningful AIS project". Many thanks to you both for making this happen.
Thanks for putting this together! Two suggestions:
Thanks for another thought-provoking post. This is quite timely for me, as I've been thinking a lot about the difference between the work of futurists and that of forecasters.
...These are people who thought a lot about science and the future, and made lots of predictions about future technologies - but they're famous for how entertaining their fiction was at the time, not how good their nonfiction predictions look in hindsight. I selected them by vaguely remembering that "the Big Three of science fiction" is a thing people say sometimes, googling it,
Your site is the single resource most commonly shared by Navigators at AI Safety Quest. We could honestly structure about a third of our calls as a guided walkthrough of it. I think the enhanced discoverability will be a huge step forward.