Upon reflection, the one thing I don't love about this post is the use of the word "newbies". I don't have deep technical chops, but I've been around the AI safety space since 2023, and I completed the BlueDot technical curriculum in both 2023 and 2024 (when it was still close to Richard Ngo's original). I still found that IABIED clarified and reinforced several important intuitions about the likely path ASI will take. I wish we had found a better term, one that could act as a scalpel to cleave the difference between things that help grow technical skills and things that help grow fundamental intuitions.
Similarly: when you're deep in the trees, it can be extremely valuable to review the map of the forest.
Ha ha ha. I was strongly tempted to add a softener to Luc's phrase. I wanted a subtitle like "Technically, if everyone reads it, everyone doesn't die (all at once)", but it doesn't have the same ring, does it?
"If everyone reads it and internalises the message, the marginal chance of everyone dying in a coordinated ASI related incident decreases relative to a hypothetical alternative which we [the course facilitators] do not observe but can hypothesise about with relatively high confidence"
I think you should hire me to do marketing
tl;dr: Lens Academy offers a new course introducing ASI x-risk for AI safety newcomers, centered around the book IABIED. We share our hypothesis for why IABIED seems more appreciated by AI safety newbies than by AI safety insiders.
Lens Academy's new intro course uses IABIED to teach newbies about ASI x-risk
Lens Academy is launching "Superintelligence 101"[1], a 6-week introductory course covering existential risks from misaligned artificial superintelligence (ASI x-risk) using the book If Anyone Builds It, Everyone Dies (IABIED), plus 1-on-1 AI tutoring and extra resources[2] on our platform to engage with key claims.[3] Each week ends with a facilitated group meeting.
Anyone can enroll, and everyone is accepted. We're set up to be highly scalable, so we don't reject any applications. In fact, we don't even have applications.
Sign up here as a participant or navigator (facilitator): https://lensacademy.org/enroll (and share this link with anyone in your network who might be interested in courses on superintelligence risk)
Teaching ASI x-risk to AI safety newcomers is different from teaching it to insiders:
1. Good resources explaining ASI x-risk barely exist
When creating our first course (Navigating Superintelligence), we repeatedly ran into the problem that for most of the learning outcomes we wanted to achieve, there are very few good, easily understandable resources out there.[4]
In many such cases, the best introductory resources are chapters from IABIED.
2. IABIED seems to be pretty successful in convincing newbies to worry about AI x-risk.
A few datapoints:
3. IABIED seems less successful at convincing AI safety insiders that alignment is hard
The book does not convince AI safety insiders of the "MIRI / ASI x-risk / alignment-is-hard" point of view nearly as well as it seems to land with the general population (i.e. AI safety newbies).
Insiders don't like IABIED because it wasn't written for them
We think IABIED doesn't resonate with insiders because it was written for a general audience. As a general-audience work:
This seems to lead to a split: AI safety insiders don't care much for the book, nor hold it in particularly high esteem, whereas it gets AI safety outsiders to care about the core of the alignment problem more reliably than any other resource we know of.
However, there are far too few AI safety insiders in the world. To avert disaster, we need a broad-based change in how the public sees AI and talks to their representatives about it. Because of these observations, we claim:
If Everyone Reads It, Nobody Dies.
Sign up here as a participant or navigator (facilitator): https://lensacademy.org/enroll (and share this link with anyone in your network who might be interested in courses on superintelligence risk)
The name of the course is likely to change a couple of times in the coming months.
From both ifanyonebuildsit.com and many other resources.
When most people read a non-fiction book about "important stuff", they talk about it with their network for a week or two and then move on to the next attention-grabbing topic. Since we think AI safety is much more important than the next attention-grabbing topic, we would like to help readers solidify the material and take steps toward getting involved.
Such as articles and videos.
There are a lot of extra resources online at ifanyonebuildsit.com, but we suspect barely anyone reading the book will look them up, especially if they disagree with the claims made in the book.